Clinics call on the U.S. government to take urgent steps to address insecurity and gang violence in Haiti

HUMAN RIGHTS MOVEMENT

The NYU Global Justice Clinic, the International Human Rights Clinic at Harvard Law School, and the Lowenstein International Human Rights Clinic at Yale Law School call on the U.S. government to take urgent steps to address insecurity and gang violence in Haiti. The clinics are deeply concerned that the U.S. government continues to support de facto Prime Minister Ariel Henry, despite strong evidence of his government’s involvement in broadening violence. The clinics are alarmed by recent and serious threats against human rights defenders, particularly staff of the Réseau National de Défense des Droits Humains (RNDDH). The status quo puts human rights defenders—and all Haitian people—at risk. The clinics are in close contact with Haitian civil society, and stress that recent U.S. legislation, the Haiti Development, Accountability, and Institutional Transparency Act and the Global Fragility Act, recognizes the right of the Haitian people to self-determination. Together, the clinics urge the U.S. government to:

  1. Support Haitian-led investigation of and accountability for human rights abuses
  2. Ensure transparency in the U.S. investigation of the murder of former President Jovenel Moïse
  3. Take concrete, effective steps to enforce U.S. laws on arms trafficking
  4. Shift support from Dr. Henry towards an inclusive and Haitian-led political process

June 27, 2022. Statements of the Global Justice Clinic do not purport to represent the views of NYU or the Center, if any.

The World Bank and co. may be paving a ‘Digital Road to Hell’ with support for dangerous digital ID

TECHNOLOGY & HUMAN RIGHTS

Global actors, led by the World Bank, are energetically promoting biometric and other digital ID systems that are increasingly linked to large-scale human rights violations, especially in the Global South. A report by researchers at New York University warns that these systems, promoted in the name of development and inclusion, might be achieving neither. Rather than the equitable digital future envisioned by the World Bank and its Identification for Development (ID4D) Initiative, the report argues that “despite undoubted good intentions on the part of some, [these systems] may well be paving a digital road to hell.”

Report cover: Paving a digital road to hell?

The report, at over 100 pages, is intended to be a “carefully researched primer as well as a call to action to all of those with an interest in safeguarding human rights to set their gaze more firmly on the multidimensional dangers associated with digital ID systems.” Governments around the world have been investing heavily in digital identification systems, often with biometric components (digital ID). The rapid proliferation of such systems is driven by a new development consensus, packaged and promoted by key global actors like the World Bank, but also by governments, foundations, vendors and consulting firms. This new ‘manufactured consensus’ holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights.

Drawing inspiration from the Aadhaar system in India, the dangerous digital ID model that is being promoted prioritizes what the primer refers to as an ‘economic identity’. The goal of such systems is primarily to establish the ‘uniqueness’ of individuals, commonly with the help of biometric technologies. The ultimate objective of such digital ID systems is to facilitate economic transactions and private sector service delivery while also bringing new, poorer individuals into formal economies and ‘unlocking’ their behavioral data. As the Executive Chairman of the influential ID4Africa, a platform where African governments and major companies in the digital ID market meet, put it at the start of its 2022 Annual Meeting earlier this week, digital ID is no longer about identity alone but “enables and interacts with authentication platforms, payments systems, digital signatures, data sharing, KYC systems, consent management and sectoral delivery platforms.”

Unlike ‘traditional systems’ of civil registration, such as birth registration, this new model of economic identity commonly sidesteps difficult questions about the legal status of those it registers and the rights associated with that status. The promises of inclusion and flourishing digital economies might appear attractive on paper, but digital ID systems have consistently failed to deliver on these promises in real world situations, especially for the most marginalized. In fact, evidence is emerging from many countries, most notably the mega digital ID project Aadhaar in India, of the severe and large-scale human rights violations linked to this model. These systems may in fact exacerbate pre-existing forms of exclusion and discrimination in public and private services. The use of new technologies may furthermore lead to novel forms of harm, including biometric exclusion, discrimination, and the many harms associated with “surveillance capitalism.”

Meanwhile, the benefits of digital ID remain ill-defined and poorly documented. From what evidence does exist, it seems that those who stand to benefit most may not be those “left behind,” but instead a small group of companies and governments. After all, where digital ID systems have tended to excel is in generating lucrative contracts for biometrics companies and enhancing the surveillance and migration-control capabilities of governments.

With such powerful backing, digital ID has taken on the guise of an unstoppable juggernaut and inevitable hallmark of modernity and development in the 21st century, and the dissenting voices of civil society have been written off as Luddites and barriers to progress. Nevertheless, the report calls on human rights organizations, other civil society organizations, and advocates who may have been on the sidelines of these debates to get more involved. The actual and potential human rights violations arising from this model of digital ID can be severe and potentially irreversible. The human rights community can play an important role in ensuring that such transformational changes are not rushed and are based on serious evidence and analysis. It can also ensure that there is sufficient public debate, with full transparency and involving all relevant stakeholders, not least the most marginalized and most affected individuals. Where necessary to safeguard human rights, such dangerous digital ID systems should be stopped altogether.

This post was originally published as a press release on June 17, 2022.

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

TECHNOLOGY & HUMAN RIGHTS

National data collection and service prioritization were supposed to make homeless services more equitable and efficient. Instead, they have created more risks and bureaucratic burdens for homeless individuals and homeless service organizations.

While serving as an AmeriCorps VISTA member supporting the IT and holistic defense teams at a California public defender’s office, I spent much of my time navigating the data bureaucracy that now weighs down social service providers across the country. In particular, I helped social workers and other staff members use tools like the Vulnerability Index – Service Prioritization Decision Assistance Tool (VI-SPDAT) and a Homeless Management Information System (HMIS). While these tools were ostensibly designed to improve care for homeless and housing-insecure people, all too often they did the opposite.

An HMIS is a localized information network and database used to collect client-level data and data on the provision of housing and services to homeless or at-risk persons. In 2009, Congress passed the HEARTH Act, mandating the use of HMIS by communities in order to receive federal funding. HMIS demands coordinated entry, a process by which certain types of data are cataloged and clients are ranked according to their perceived need. One of the most common tools for coordinated entry—and the one used by the social workers I worked with—is the VI-SPDAT: effectively a questionnaire, a battery of highly invasive questions that seek to determine the level of need of the homeless or housing-insecure individual to whom it is administered.

These tools have been touted as game-changers. Yet while homelessness across the country, and especially in California, continued to decrease modestly in the years immediately following the enactment of the HEARTH Act, it began to increase again in 2019 and rose sharply in 2020, even before the onset of the COVID-19 pandemic. This is not to suggest a causal link; indeed, the evidence suggests that factors such as rising housing costs and a worsening methamphetamine epidemic are at the heart of rising homelessness. But there is little evidence that intrusive tools like VI-SPDAT alleviate these problems.

Indeed, these tools have themselves been creating problems for homeless persons and social workers alike. There have been harsh criticisms from scholars like Virginia Eubanks about the accuracy and usefulness of VI-SPDAT. It has been found to produce unreliable and racially biased results. Rather than decreasing bias as it purports to do, VI-SPDAT has baked bias into its algorithms, providing a veneer of scientific objectivity for government officials to hide behind.

But even if these tools were to be made more reliable and less biased, they would nonetheless cause harm and stigmatization. Homeless individuals and social workers alike report finding the assessment dehumanizing and distressing. For homeless individuals, it can also feel deeply risky. Those who don’t score high enough on the assessment are often denied housing and assistance altogether. Those who score too high run the risk of involuntary institutionalization.

Meanwhile, these tools place significant burdens on social workers. To receive federal funding, organizations must provide not only an intense amount of highly intimate information about homeless persons and their life histories, but also a minute accounting of every interaction between the social worker and the client. One social worker would frequently work with clients from 9-5, go home to make dinner for her children, and then work into the wee hours of the night attempting to log all of her data requirements.

I once sat through a 45-minute video call with a veteran social worker who broke down in tears, worried that the grant funding her position might be taken away if her record keeping was less than perfect. Yet the design of the HMIS made it virtually impossible to be completely honest. The system anticipated that four-hour client interactions could easily be broken down into distinct chunks—discussed x problem from 4:15 to 4:30, y problem from 4:30 to 4:45, and so on. Of course, anyone who has ever had a conversation with another human being, let alone a human being with mental disabilities or substance use problems, knows that interactions are rarely so tidy and linear.

While this data is claimed to be kept very secure, in reality, hundreds of people in dozens of organizations typically have access to any given HMIS. There are guidelines in place to protect the data, but there is minimal monitoring to ensure that these guidelines are being followed, and many users found them very difficult to follow while working from home during the pandemic. I heard multiple stories of police or prosecutors improperly accessing information from HMIS. Clients can request to have their information removed from the system, but the process for doing so is rarely made clear to them, nor is this process clear even for the social workers processing the data.

After years of criticism, OrgCode—the group which develops VI-SPDAT—announced in 2021 that it would no longer push VI-SPDAT updates, and as of 2022 it no longer provides support for the current iteration of VI-SPDAT. While this is a commendable move from OrgCode, stakeholders in homeless services must acknowledge the larger failures of HMIS and coordinated entry more generally. Many of the other tools used to perform coordinated entry share VI-SPDAT’s problems, in part because coordinated entry in effect requires this intrusive data collection about highly personal issues in order to determine needs and rank clients accordingly. The problems are baked into the data requirements of coordinated entry itself.

The answer to this problem cannot be to completely do away with any classification tools for housing insecure individuals, because understanding the scope and demographics of homelessness is important in tackling it. But clearly a drastic overhaul of these systems is needed to make sure that they are efficient, noninvasive, and accurate. Above all, it is crucial to remember that tools for sorting homeless individuals are only useful to the extent that they ultimately provide better access to the services that actually alleviate homelessness, like affordable housing, mental health treatment, and addiction support. Demanding that beleaguered social service providers prioritize data collection over services, all while using intrusive, racially biased, and dehumanizing tools, will only worsen an intensifying crisis.

May 17, 2022. Batya Kemper, J.D. program, NYU School of Law.

Risk Scoring Children in Chile

TECHNOLOGY & HUMAN RIGHTS

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation. The video recording and additional readings can be found below.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: social programs of which the child’s family are beneficiaries; families’ educational backgrounds; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, percentage of single-parent households, and unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. Through this design, children in poorer areas can be expected to receive higher risk scores. This is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty—those from indigenous groups have Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, thus raising issues of racial and ethnic bias. Furthermore, the datasets used will also reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected through disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended, have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System which “translates data on families into risk profiles [of] the 20 families in most urgent need” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

TECHNOLOGY & HUMAN RIGHTS

In launching its COVID-19 relief payments scheme, the South Korean government had two goals: providing a safety net for its citizens and boosting consumption for the economy. It therefore provided cashless payments, issuing credit card points rather than cash. However, this had serious implications for the vulnerable.

In May 2020, South Korea’s government distributed its COVID-19 emergency relief payments to all households through cashless channels. Recipients predominantly received points on credit cards rather than cash transfers. From the outset, the government stated explicitly that this universal transfer scheme had two goals: it was not only intended to mitigate the devastating impacts of the pandemic on people’s livelihoods, but also explicitly aimed at simultaneously boosting consumption in the South Korean economy. Providing cash would not necessarily boost consumption, as it could be placed in savings accounts. Therefore, credit card points were offered instead, requiring recipients to spend the relief funds. But in trying to “kill two birds with one stone” by promoting consumption through the relief program, the government jeopardized the welfare aim of the program.

Once the payouts began, the government boasted that the delivery of the relief funds was timely and efficient. The relief program had been launched based on business agreements with credit card companies for “rapid and smooth” payment, and indeed, the card-based channel enabled distribution that was much faster than in other countries. Although “offline” applications for the relief program could be made in person at banks, the scheme was designed around the submission of applications through credit card companies’ websites or apps. The relief funds were then deposited onto recipients’ credit or debit cards in the form of points—separated from normal credit card points—within two days of applying. In September 2021, during the second round of universal relief payments known as the “COVID-19 Win-Win National Relief Fund,” 90% of expected recipients received their payments within 12 days.

Restricting spending to boost spending

However, paying recipients in credit card points meant restricting their access to cash. While low-income households received the relief fund in cash during the first round of COVID-19 relief, they had to apply for the payment in the second round and could only choose among cashless methods, which included credit cards and debit cards. To make matters worse, the policy placed constraints on where points could be used, in the name of encouraging consumption and growing the local economy. The points could only be used in designated places: they could not be used to pay utility bills, repay a mortgage, or shop online, and they could not be transferred to others’ bank accounts or withdrawn as cash. Recipients therefore had no choice but to use their relief funds at certain local restaurants, markets, clothing stores, and the like. If the points were not used within approximately three to four months of disbursement, they were returned to the national treasury. All of these conditions were the outcome of the fact that the policy specifically aimed at boosting consumption.

Jeopardizing the welfare aim

These restrictions had significant repercussions on people in poverty, in two key ways. First, the relief fund failed to fulfill the right to social protection of vulnerable people at risk. As utility bills, telecommunication fees, and even health insurance fees could not be paid with the points, many were left unable to pay for the things they needed to pay for, while much-needed funds remained effectively stranded on the card. What use is a card meant only for restaurants and shops when one is in arrears on utility bills and health insurance fees, and at risk of having electricity supply and health insurance benefits cut off? Those who needed cash immediately sometimes handed their credit cards to other people to use, and then requested repayment in cash at below face value. It was also reported that a number of people bought products at stores where relief fund points could be used, and then sold the products at a lower price on second-hand online markets to obtain cash. Although the government warned that it would crack down on such “illegal transactions,” the demand for cash could not be controlled.

Second, the right to housing of vulnerable populations was not sufficiently protected through this scheme. Homeless persons, who needed the most help, were severely affected because the cashless relief funds could not function as a payment method for monthly rent. In a survey, homeless people and slice-room dwellers were the groups that most strongly agreed that “the COVID-19 relief fund should be distributed in cash.” Further, given that low-income people spent a higher proportion of their income on rent than those from other social classes, the fact that the relief funds could not be used on rent also significantly affected low-income households. A number of temporary or informal workers who lost their jobs due to the pandemic were on the verge of being pushed into poorer conditions because they could not afford their rent. The relief program could not help these groups cover some of their most urgent expenditures—housing costs—at all.

Boosting consumption can be expected as an indirect effect of government relief funds, but it must not be adopted as a specific goal of such programs. Attempting to achieve this consumption-oriented goal through the relief payments resulted in the scheme’s design imposing limitations on the use of funds, thereby undermining the scheme’s ability to help those in the most extreme need. As the government set boosting consumption as one of the aims of the program and seemingly prioritized it over the welfare aim, the delivery of the payments was devised in an inappropriate way that did not take the most vulnerable into account.

Killing two birds with one stone?

The Korea Development Institute (KDI) found that only about 30% of the first emergency relief funds led to an increase in consumption, while the remaining 70% went to household debt repayment or savings. In the end, the cashless relief stipend did not successfully increase consumption, even as it weakened the program’s social security function.

Such schemes aimed at “killing two birds with one stone” were doomed to fail from the beginning because the two goals come into tension with one another in the program’s design. The consumption aim is likely to harm the welfare aim by pushing for cashless, controlled, and restricted use of funds. The sole purpose of emergency relief funds in a crisis should be to provide assistance for the most vulnerable. Such schemes should be delivered in a way that best fulfills this aim: focused on providing a safety net, and designed from the perspective of rights-holders, not consumers.

April 19, 2022. Bo Eun Kwon, LLM program, NYU School of Law whose interests include international human rights law, economic and social rights, and digital governance. She has worked at the National Human Rights Commission of Korea.

Indigenous Women in Guyana Commit to Protecting their Lands from Destructive Mining, Deforestation

CLIMATE & ENVIRONMENT

At the end of an indigenous women’s empowerment conference in the Parikwarnau Village in Guyana from April 4-5, 2022, delegates pledged to take action and demanded the same from the government of Guyana.

The eighty-six women attending the conference committed to advocating for legal recognition of traditional Wapichan lands, continuing to sustainably care for those lands, protecting waters and forests from the effects of mining, combating climate change, and addressing pressing social issues. These commitments and demands were set out in a Call to Action by the female protectors of the Wapichan Wiizi.

The conference was hosted by the women’s arm of the South Rupununi District Council (SRDC) (and Global Justice Clinic partner), the Wapichan Women’s Movement (WWM). Led by Immaculata Casimero and Faye Fredericks, the conference covered key topics including indigenous women’s protections under international law, particularly CEDAW, and their role in the fight for climate justice. For example, as their families’ primary food providers, indigenous women are particularly vulnerable to the food insecurity that has resulted from climate change. Women learned together about concepts like “nature-based solutions”—the idea that focusing on protecting nature and biodiversity through sustainable actions like allowing forests to regrow is a way of combating climate change. “Indigenous peoples are the original inventors of ‘nature-based solutions,’” Immaculata Casimero said at the end of the conference. “To combat deforestation, we have captured aerial images of impacted areas and plan to use them in advocacy efforts.”

Casimero and Fredericks reported feeling a palpable shift in the room after the conference; they are confident that indigenous women felt empowered by this experience and will return to their communities and share their knowledge with others. The women’s plans are captured by the concrete commitments and demands listed in the Call to Action, which the SRDC posted on Facebook.

This post was originally published on April 18, 2022.

Haiti Land Grab Violates Women’s Rights and Deepens Climate Crisis, Say Rights Groups

HUMAN RIGHTS MOVEMENT

Haiti Land Grab Violates Women’s Rights and Deepens Climate Crisis, Say Rights Groups

NYU Global Justice Clinic and Solidarite Fanm Ayisyèn submission to the U.N. Special Rapporteur on Violence Against Women underscores consequences of violent land grab against women in Savane Diane, Haiti

Español | Kreyòl

A violent land grab that displaced women farmers in Savane Diane, Haiti, constituted gender-based violence and has aggravated climate vulnerability, NYU’s Global Justice Clinic and Solidarite Fanm Ayisyèn (SOFA) told the UN Special Rapporteur on Violence Against Women in a submission lodged late last week. The Savane Diane land grab, which expropriated land used by SOFA to teach women ecologically sustainable farming techniques, is just one of many in recent months. Land grabs in Haiti are on the rise, while the Haitian judiciary has failed to respond.

“We are asking for the Special Rapporteur’s attention because we have been unable to secure justice in Haiti,” said Sharma Aurelien, SOFA’s Executive Director. “This land helped women combat poverty and benefited all of society,” she continued.

In 2020, armed men violently forced SOFA members from land that the Haitian government had granted them exclusive rights to use, severely beating some. SOFA learned that an agro-industry company, Stevia Agro Industries S.A., was claiming title to the area to grow stevia for export. The Haitian government revoked SOFA’s rights to the land, without a court process, and, in early 2021, the late President Jovenel Moïse converted the land into an agro-industrial free trade zone by executive decree.

“The Minister of Agriculture set himself up as a judge, siding with Stevia Industries and allowing it to continue its activities while SOFA was ordered to suspend ours,” said Marie Frantz Joachim, SOFA coordinating committee member.

The organizations’ submission underscores the compounding rights violations caused by the land grab. It is deepening poverty and food insecurity in the area, and women who have sought work with Stevia Industries have experienced sexual exploitation and wage theft. The grab also violates residents’ right to water in a context of deepening climate crisis: the land seized includes three State-protected water reservoirs.

“We lost our water reserves because they have now become the [company’s]. Meanwhile, we are experiencing a major water crisis,” said Esther Jolissaint, an affected SOFA member in Savane Diane.

Climate change, land grabbing, and violence against women are interconnected phenomena, say the organizations. Haiti is often named as one of the five countries most affected by the climate crisis. Land grabbing can both result from and contribute to climate vulnerability, as increasingly scarce agricultural land is converted to environmentally degrading monoculture agriculture or other industrial use. Women are particularly vulnerable.

“Rural women’s land rights and access to agricultural resources are essential to securing their human rights and supporting climate resilience,” said Sienna Merope-Synge, Co-Director of GJC’s Caribbean Climate Justice Initiative. “Land grabbing against women should be recognized as a form of gender-based violence,” she continued.

The joint submission emphasizes SOFA’s call for reparations and restitution for women affected by the land grab. It also highlights SOFA and Haitian social movements’ call for greater protections for peasant land rights, as rural communities in Haiti note an uptick in land grabbing. Greater international attention and condemnation are needed, the organizations say. “We are calling for solidarity from others engaged in the global struggle to ensure respect for human rights,” concluded Aurelien.

This post was originally published as a press release on April 5, 2022.

This post reflects the statement of the Global Justice Clinic and not necessarily the views of NYU, NYU Law, or the Center for Human Rights and Global Justice.

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented on at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunity and job creation in the knowledge economy, improved environmental sustainability, and increased efficiency and convenience. Through collecting and inter-connecting all kinds of “mundane” data—such as electricity patterns, data from increasingly-intrusive IoT products, and geo-location and mobility data—into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern, technologically-advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar is an LLM candidate at NYU School of Law whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a prominent law firm in India.

Experimental automation in the UK immigration system

TECHNOLOGY & HUMAN RIGHTS

Experimental automation in the UK immigration system

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light through the efforts of a legal charity (the Public Law Project) and investigative journalists that an automated system is largely determining who gets investigated in the first place. An algorithm, hidden from public view, is sorting couples into “pass” and “fail” categories, based on eight unknown criteria.
Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.

We know little about how this automated system operates in practice or how effective it is in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk, however, that the system is racially discriminatory. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian, and Albanian people, receive “fail” ratings more frequently than others.

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

GJC Partners in Haiti and Guyana Testify Before IACHR on Detriment of Extractive Industry in the Caribbean

CLIMATE AND ENVIRONMENT

GJC Partners in Haiti and Guyana Testify Before IACHR on Detriment of Extractive Industry in the Caribbean

On October 26, 2021, advocates and experts from five Caribbean countries (Haiti, Jamaica, Guyana, Trinidad and Tobago, and The Bahamas) presented on the impact of extractive industry activities on human rights and climate change in the Caribbean in a hearing before the Inter-American Commission on Human Rights (IACHR). Samuel Nesner, a founding member of Kolektif Jistis Min and long-time partner of NYU Law’s Global Justice Clinic, presented on the serious harm that extraction and land grabs in Haiti cause to the human rights of rural communities. Another Global Justice Clinic partner and member of the South Rupununi District Council, Immaculata Casimero, presented on the impact of extractive industries on indigenous women.

Samuel Nesner highlighted that for centuries land in Haiti has been expropriated and transferred to the elite, with rural communities bearing the brunt of the harm. Repeated expropriation of land, also known as land grabbing, has forced farmers and their families from their land, often under threat of violence and almost always without adequate compensation for the loss of their land and sole source of income. Many believe that the land grabs relate to the content of the soil: much of the area that has been taken from farmers in the rural North is known for its mineral resources. Between 2006 and 2013, the Haitian government granted four U.S. and Canadian companies more than 50 mining permits. Many were granted in flagrant violation of Haitian law, without consultation of the dozen communities who live on the land under permit, and without first conducting an adequate environmental and social impact assessment. Residents of these communities have reported that company representatives entered their land without permission, taking samples and digging holes in their farmland.

Immaculata Casimero noted that extractive industries pose a particular danger to indigenous peoples, who face longstanding land tenure insecurity. In Immaculata’s own Wapichan territory, many traditional indigenous lands are left unrecognized by the Guyanese government—and therefore vulnerable to big businesses looking to obtain agricultural leases and to extractive industries seeking to mine gold from their land. Immaculata emphasized that allowing mining on indigenous land harms their cultural heritage and way of life, and that women are especially affected as the main conveyors and protectors of this cultural heritage. Mining damages not only cultural heritage but also the community’s health: it has led to mercury poisoning by contaminating crucial headwaters and has compounded the effects of climate change, with flooding, lower crop yields, and higher food insecurity. The presence of new miners has also raised social concerns, such as an increase in gender-based violence and prostitution.

Following the speakers’ presentations, IACHR Commissioners commended the speakers on their efforts to address the urgent issue of the impact of extractive industries in the Caribbean. IACHR Commissioner Margaret May Macauley (Jamaica) expressed her concern about the “complete lack of prior information and prior consultation before the majority, if not all, of these extractive industries commence. That is, the governments of these States enter into contracts with the corporations without prior information to the peoples who reside in the lands, on the lands, or by the seas, and they do not engage in prior consultation with them… The persons are left completely unprotected.” This certainly rings true in Haiti and Guyana, where foreign companies have repeatedly profited off the land of Haitian farmers and the Wapichan people without prior consultation about the use of their land.

February 14, 2022.