
STUDENTS

Emerging Scholars Conference

Since 2003, the conference has become a cornerstone of the NYU human rights experience, fostering a culture of appreciation for high-quality, engaged scholarship among the law school’s human rights community. Students present original papers and receive expert feedback in a constructive, collaborative setting.

The Conference is an opportunity for all NYU School of Law students to submit and present papers on international law and human rights issues and gain valuable feedback on their work.

  • Submissions are reviewed, and selected papers are accepted into the conference program.
  • Accepted papers are shared with an interdisciplinary group of scholars and practitioners for feedback.
  • Presenters and commentators discuss each paper at the event.
  • One outstanding paper receives the Global Justice Emerging Scholar Essay Award, which includes an award certificate and a commitment from the organizing team to support publication of the paper.

Papers presented at this conference have gone on to be published in quality journals, including the Canadian Yearbook of International Law, the Journal of International Criminal Justice, and the NYU Journal of International Law and Politics.

The Center hosts the Emerging Scholars Conference each Spring in partnership with the Institute for International Law and Justice.

All currently enrolled full-time students at NYU Law are eligible to submit a paper.

Students associated with the Center for Human Rights and Global Justice as Human Rights Scholars, or as Fellows through the International Law and Human Rights Fellowship (2023 or 2024), are strongly encouraged to submit a paper for presentation.

The submission cycle for the 2024 conference is now closed. Recruitment for each conference takes place in February of the Spring semester.

The following documents must be submitted via an application form:

  • Short bio
  • Abstract
  • Final Paper Draft


TECHNOLOGY AND HUMAN RIGHTS

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

The Jordanian government is using an algorithm to rank social protection applicants from least poor to poorest, as part of a poverty alleviation program. While helpful to those who receive aid, the system excludes people in need because it fails to accurately reflect the complex realities of poverty. It uses an outdated poverty measure, weights imperfect indicators—such as utility consumption—and relies on a static view of socioeconomic status.

On November 28, 2023, the Digital Welfare State and Human Rights project hosted the sixteenth episode in the Transformer States conversation series on Digital Government and Human Rights. Victoria Adelmant and Katelyn Cioffi interviewed Hiba Zayadin, a senior researcher in the Middle East and North Africa division at Human Rights Watch (HRW), about a report published by HRW on the Jordanian government’s use of an algorithmic system to rank applicants for a welfare program based on their poverty level, using data like electricity usage and car ownership. This blog highlights key issues related to the system’s inability to reflect the complexities of poverty and its algorithmic exclusion of individuals in need.

The context behind Jordan’s poverty targeting program 

‘Poverty targeting’ is generally understood to mean directing social program benefits towards those most in need, with the aim of using limited government resources efficiently and improving living conditions for the poorest individuals. This approach entails the collection of wide-ranging information about socioeconomic circumstances, often through in-depth surveys and interviews, to enable means testing or proxy means testing. Some governments have adopted an approach in which beneficiaries are ‘ranked’ from richest to poorest, and aid is targeted only to those falling below a certain threshold. The World Bank has long advocated for poverty targeting in social assistance. For example, since 2003, the World Bank has supported Brazil’s Bolsa Família program, which is targeted at the poorest 40% of the population.

Increasingly, the World Bank has turned to new technologies to improve the accuracy of poverty targeting programs, providing funding to many countries for data-driven, algorithm-enabled approaches to targeting. Such programs have been implemented in countries including Jordan, Mauritania, Palestine, Morocco, Iraq, Tunisia, Egypt, and Lebanon.

Launched in 2019 with World Bank support, Jordan’s Takaful program, an automated cash transfer program, provides monthly support to families (roughly US $56 to $192) to mitigate poverty. Managed by the National Aid Fund, the program targets the more than 24% of Jordan’s population that falls under the poverty line. The Takaful program has been especially welcome in Jordan, in light of rising living costs. However, policy choices underpinning this program have excluded many individuals who are in need: eligibility restrictions limit access solely to Jordanian nationals, such that the program does not cover registered Syrian refugees, Palestinians without Jordanian passports, migrant workers, and the non-Jordanian families of Jordanian women—since Jordanian women cannot pass on citizenship to their children. Initial phases of the program entailed broader eligibility, but criteria were tightened in subsequent iterations.

Mismatch between the Takaful program’s indicators and the reality of people’s lives

Further exclusions have arisen from the operation of the algorithmic system used in the program. When a person applies to Takaful, the system first determines eligibility by checking whether the applicant is a citizen and whether they fall under the poverty line. It then employs an algorithm, relying on 57 socioeconomic indicators, to rank applicants from least poor to poorest. The National Aid Fund draws on existing databases as well as applicants’ answers to a questionnaire that they must fill out online. Indicators include household size, geographic location, utilities consumption, ownership of businesses, and car ownership. It is unclear how these indicators are weighted, but the National Aid Fund has acknowledged that some indicators lead to the automatic exclusion of applicants from the Takaful program. Applicants who own a car that is less than five years old, or a business valued at over 3,000 Jordanian dinars, for instance, are automatically excluded.
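To make the decision flow described above concrete, the sketch below mimics a two-stage process of this kind: a categorical eligibility check with hard exclusion rules, followed by a weighted “poverty score” used to rank the remaining applicants. It is purely illustrative—the Takaful system’s actual code, indicator weights, and most thresholds are not public—so every indicator name and weight here (other than the car-age and business-value exclusions reported by HRW) is a hypothetical assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    is_citizen: bool
    below_poverty_line: bool
    household_size: int
    monthly_electricity_kwh: float
    newest_car_age_years: Optional[float]  # None if the household owns no car
    business_value_jod: float              # 0 if no registered business

def is_eligible(a: Applicant) -> bool:
    # Stage 1: categorical eligibility (citizenship and poverty line).
    if not (a.is_citizen and a.below_poverty_line):
        return False
    # Stage 2: automatic exclusion rules of the kind reported by HRW.
    if a.newest_car_age_years is not None and a.newest_car_age_years < 5:
        return False
    if a.business_value_jod > 3000:
        return False
    return True

def poverty_score(a: Applicant) -> float:
    # Stage 3: weighted proxy indicators; a higher score means "assessed as poorer".
    # The weights below are invented for illustration only.
    score = 0.5 * a.household_size
    score -= 0.01 * a.monthly_electricity_kwh  # assumes higher consumption => less poor
    return score

applicants = [
    Applicant(True, True, 6, 150.0, None, 0.0),
    Applicant(True, True, 4, 420.0, 8.0, 0.0),
    Applicant(True, True, 5, 200.0, 3.0, 0.0),  # excluded: car under five years old
]

# Rank eligible applicants from poorest (highest score) downward.
ranked = sorted((a for a in applicants if is_eligible(a)), key=poverty_score, reverse=True)
for a in ranked:
    print(a.household_size, round(poverty_score(a), 2))
```

Even in this toy version, a hard threshold on a single proxy (a newer car, a modest business) removes an applicant entirely, regardless of their other circumstances—the kind of rigidity that HRW’s report critiques.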

In its recent report, HRW highlights a number of shortcomings of the algorithmic system deployed in the Takaful program, critiquing its inability to reflect the complex and dynamic nature of poverty. The system, HRW argues, uses an outdated poverty measure, and embeds many problematic assumptions. For example, the algorithm gives some weight to whether an applicant owns a car. However, there are cars in people’s names that they do not actually own; some people own cars that broke down long ago, but they cannot afford to repair them. Additionally, the algorithm assumes that higher electricity and water consumption indicates that a family is less vulnerable. However, poorer households in Jordan in many cases actually have higher consumption—a 2020 survey showed that almost 75% of low- to middle-income households lived in apartments with poor thermal insulation.

Furthermore, this algorithmic system is designed on the basis of a single assessment of socioeconomic circumstances at a fixed point in time. But poverty is not static; people’s lives change and their level of need fluctuates. Another challenge is the unpredictability of aid: in this conversation with CHRGJ’s Digital Welfare State and Human Rights team, Hiba shared the story of a new mother who had been suddenly and unexpectedly cut off from the Takaful program, precisely when she was most in need.

At a broader level, introducing an algorithmic system such as this can also exacerbate information asymmetries. HRW’s report highlights issues concerning opacity in algorithmic decision-making—both for government officials themselves and those subject to the algorithm’s decisions—such that it is more difficult to understand how decisions are being made within this system.

Recommendations to improve the Takaful program

Given these wide-ranging implications, HRW’s primary recommendation is to move away from poverty targeting algorithms and toward universal social protection, which could cost under 1% of the country’s GDP. This could be funded through existing resources, tackling tax avoidance, implementing progressive taxes, and leveraging the influence of the World Bank to guide governments towards sustainable solutions. 

When asked during this conversation whether the algorithm used in the Takaful program could be improved, Hiba noted that a technically perfect algorithm executing a flawed policy will still lead to negative outcomes. She argued that it is the policy itself – the attempt to rank people from least poor to poorest – that is prone to exclusion errors, and warned that while technology may be shiny, promising to make targeting accurate, effective, and efficient, it can also be a distraction from the policy issues at hand.

Instead of flattening economic realities and excluding people who are, in reality, in immense need, Hiba recommended that support be provided inclusively and universally—to everyone during vulnerable stages of life, regardless of their income or wealth. Rather than focusing on technology that enables ever-more precise targeting, Jordan should embrace solutions that allow for more universal social protection.

Rebecca Kahn, JD program, NYU School of Law, and Human Rights Scholar at the Digital Welfare State & Human Rights project. Her research interests include responsible AI governance, digital rights, and consumer protection. She previously worked in the U.S. House and Senate as a legislative staffer.


TECHNOLOGY AND HUMAN RIGHTS

Co-creating a Shared Human Rights Agenda for AI Regulation and the Digital Welfare State

On September 26, 2023, the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU Law and Amnesty Tech’s Algorithmic Accountability Lab (AAL) brought together 50 participants from civil society organizations across the globe to discuss the use and regulation of artificial intelligence in the public sector, within a collaborative online strategy session entitled ‘Co-Creating a Shared Human Rights Agenda for AI and the Digital Welfare State.’ Participants spanned diverse geographies and contexts—from Nigeria to Chile, and from Pakistan to Brazil—and included organizations working across a broad spectrum of human rights issues such as privacy, social security, education, and health. Through a series of lightning talks and breakout room discussions, the session surfaced shared concerns regarding the use of AI in public sector contexts, key gaps in existing discussions surrounding AI regulation, and potential joint advocacy opportunities.

Global discussions on the regulation of artificial intelligence (AI) have, in many contexts, thus far been preoccupied with whether to place meaningful constraints on the development, sale, and use of AI by private technology companies. Less attention has been paid to the need to place similar constraints on governments’ use of AI. But governments’ enthusiastic adoption of AI across public sector programs and critical public services has been accelerating apace around the world. AI-based systems are consistently tested in spheres where some of the most marginalized and low-income groups are unable to opt out – for instance, machine learning and other technologies are used to detect welfare benefit fraud, to assess vulnerability and determine eligibility for social benefits like housing, and to monitor people on the move. All too often, however, this technological experimentation results in discrimination, restriction of access to key services, privacy violations, and many other human rights harms. As governments eagerly build “digital welfare states,” incorporating AI into critical public services, the scale and severity of potential implications demands that meaningful constraints be placed on these developments. 

In the past few years, a wide array of regulatory and policy initiatives aimed at regulating the development and use of AI have been introduced – in Brazil, China, Canada, the EU, and the African Commission on Human and Peoples’ Rights, among many other countries and policy fora. However, what is emerging from these initiatives is an uneven patchwork of approaches to AI regulation, with concerning gaps and omissions when it comes to public sector applications of AI. Some of the world’s largest economies – where many powerful technology companies are based – are embarking on new regulatory initiatives with impacts far beyond their territorial confines, while many of the groups likely to be most affected have not been given sufficient opportunities to participate in these processes.

Despite these shortcomings, ongoing efforts to craft regulatory regimes do offer a crucial and urgent entry point for civil society organizations to seek to highlight critical gaps, to foster greater participation, and to contribute to shaping future deployments of AI in these important sectors.

In hosting this collaborative event on AI regulation and the digital welfare state, the AAL and the Center sought to build an inclusive space for civil society groups from across regions and sectors to forge new connections, share lessons, and collectively strategize. We sought to expand mobilization and build solidarity by convening individuals from dozens of countries, who work across a wide range of fields – including “digital rights” organizations, but also bringing in human rights and social justice groups who have not previously worked on issues relating to new technologies. Our aim was to brainstorm how actors across the human rights ecosystem can, in practice, help to elevate more voices into ongoing discussions about AI regulation.

Key issues for AI regulation in the digital welfare state

In breakout sessions, participants emphasized the urgent need to address serious harms that are already resulting from governments’ AI uses, particularly in contexts such as border control, policing, the judicial system, healthcare, and social protection. The public narrative – and accelerated impetus for regulation – has been dominated by discussion of existential threats AI may pose in the future, rather than the severe and widespread threats that are already seen in almost every area of public services. In Serbia, the roll-out of Social Cards in the welfare system has excluded thousands of the most marginalized from accessing their social protection entitlements; in Brazil, the deployment of facial recognition in public schools has subjected young children to discriminatory biases and serious privacy risks. Deployments of AI across public services are consistently entrenching inequalities and exacerbating intersecting discrimination – and participants noted that governments’ increasing interest in generative AI, which has the potential to encode harmful racial bias and stereotypes, will likely only intensify these risks.

Participants also noted that it is likely that AI will continue to impact groups that may defy traditional categorizations – including, for instance, those who speak minority languages. Indeed, a key theme across discussions was the insufficient attention paid in regulatory debates to AI’s impacts on culture and language. Given that systems are generally trained only in dominant languages, breakout discussions surfaced concerns about the potential erasure of traditional languages and loss of cultural nuance.

As advocates work not only to remedy some of these existing harms, but also to anticipate the impacts of the next iterations of AI, many expressed concern about the dominant role that the private sector plays in governments’ roll-outs of AI systems, as well as in discussions surrounding regulation. Where tech companies – who are often protected by powerful lobby groups, commercial confidentiality, and intellectual property regimes – are selling combinations of software, hardware, and technical guidance to governments, this can pose significant transparency challenges. It can be difficult for civil society organizations and affected individuals to understand who is providing these systems, as well as to understand how decisions are made. In the welfare context, for example, beneficiaries are often unaware of whether and how AI systems are making highly consequential decisions about their entitlements. Participants noted that human rights actors need the capacity and resources to move beyond traditional human rights work, to engage with processes such as procurement, standard-setting, and auditing, and to address issues related to intellectual property regimes and proliferating public-private partnerships underlying governments’ uses of AI.

These issues are compounded by the fact that, in many instances, AI-based systems are designed and built in countries such as the US and then marketed and sold to governments around the world for use across critical public services. Often, these systems are not designed with sensitivity to local contexts, cultures, and languages, nor with cognizance of how the technology will interface with the political, social, and economic landscape where it is deployed. Civil society organizations also face additional barriers when seeking transparency and access to information from foreign companies. As AI regulation efforts advance, a failure to consider potential extraterritorial harms will leave a significant accountability gap and risk deepening global inequalities. Many participants therefore noted the importance both of ensuring that regulation in countries where tech companies are based includes diverse voices and addresses extraterritorial impacts, and of ensuring that Global North models of regulation, which may not be fit for purpose, are not automatically “exported.”

A way forward

The event ended with a strategizing session that revealed the diverse strengths of the human rights movement and multiple areas for future work. Several specific and urgent calls to action emerged from these discussions.

First, given the disproportionate impacts of governments’ AI deployments on marginalized communities, a key theme was the need for broader participation in discussions on emerging AI regulation. This includes specially protected groups such as indigenous peoples, minoritized ethnic and racial groups, immigrant communities, people with disabilities, women’s rights activists, children, and LGBTQ+ groups, to name just a few. Without learning from and elevating the perspectives and experiences of these groups, regulatory initiatives will fail to address the full scope of the realities of AI. We must therefore develop participatory methodologies that bring the voices of communities into key policy spaces. More routes to meaningful consultation would lead to greater power and autonomy for previously marginalized voices to shape a more human rights-centric agenda for AI regulation. 

Second, the unique impacts that public sector use of AI can have on human rights, especially for marginalized groups, demand a comprehensive approach to AI regulation that takes careful account of specific sectors. Regulatory regimes that fail to include meaningful sector-specific safeguards for areas such as health, education, and social security will fail to address the full range of AI-related harms. Participants noted that existing tools and mechanisms can provide a starting point – such as consultation and testing requirements, specific prohibitions on certain kinds of systems, requirements surrounding proportionality, mandatory human rights impact assessments, transparency requirements, periodic evaluations, and supervision mechanisms.

Finally, there was a shared desire to build stronger solidarity across a wider range of actors, and a call to action for more effective collaborations. Participants from around the world were keen to share resources, partner on specific advocacy goals, and exchange lessons learned. Since participants focus on many diverse issues, and adopt different approaches to achieve better human rights outcomes, collaboration will allow us to draw on a much deeper pool of collective knowledge, methodologies, and networks. It will be especially critical to bridge silos between those who identify more as “digital rights” organizations and groups working on issues such as healthcare, or migrants’ rights, or on the rights of people with disabilities. Elevating the work of grassroots groups, and improving diversity and representation among those empowered to enter spaces where key decisions around AI regulation are made, should also be central in movement-building. 

There is also an urgent need for more exchange not only across the human rights ecosystem, but also with actors from other disciplines who bring different forms of technical expertise, such as engineers and public interest technologists. Given the barriers to entry to regulatory spaces – including the resources, long-term commitment, and technical vocabularies they demand – effective coalition-building and information sharing could help to lessen these burdens.

While this event brought together a fantastic and energetic group of advocates from dozens of countries, these takeaways reflect the views of only a small subset of the relevant stakeholders in these debates. We ended the session hopeful, but with the recognition that there is a great deal more work needed to allow for the full participation of affected communities from around the world. Moving forward, we aim to continue to create spaces for varied groups to self-organize, continue the dialogue, and share information. We will help foster collaborations and concretely support organizations in building new partnerships across sectors and geographies, and hope to continue to co-create a shared human rights agenda for AI regulation for the digital welfare state.

As we continue this work and seek to support efforts and build collaborations, we would love to hear from you – please get in touch if you are interested in joining these efforts.

November 14, 2023. Digital Welfare State and Human Rights Project at NYU Law Center for Human Rights and Global Justice, and Amnesty Tech’s Algorithmic Accountability Lab. 


HUMAN RIGHTS MOVEMENT

What I Should Have Said to Fernando Botero

Your art is a provocation to viewers to ask: what is our role in safeguarding human rights? A reflection on meeting Colombian artist Fernando Botero. 

Image from Slideshow: The Botero Exhibit at Berkeley Law

I was privileged to meet the world-famous Colombian artist Fernando Botero, who died last month [September 2023] at age 91, when he visited the University of California, Berkeley in 2007. I teach human rights at the law school, and the artist came to campus for the exhibit of his 2005 Abu Ghraib series. The canvasses and sketches depict the horrors of Iraqi prisoner abuse by US soldiers, based on leaked photographs taken by service members at the Abu Ghraib prison facility.

Overwhelmed by the paintings and awe-struck by the artist who created them, I fumbled my few seconds with Mr. Botero. My memory is that I offered an anodyne appreciation of his work. If I could speak with him now, here is what I would say:

Mr. Botero, every day I enter the law school I try to keep in mind that the job of law professors is to train the next generation of lawyers to embody the highest values of the profession. It is true that we teach law students how to analyze the law, how to evaluate the strength of arguments, and how to weigh the equities in any given case. But law is not a set of rules that lawyers discover or inherit. Law is made through human intervention, in the form of legislation, interpretation by lawyers, and judicial decisions. You made vivid the power that legal professionals have to strengthen or to destroy the rule-of-law fabric that sustains humanity.

Your art is a provocation to viewers to ask: what is our role in safeguarding human rights?

Government lawyers drafted the rules for interrogating prisoners captured in the so-called War on Terror, setting the background norms for the torture of prisoners perpetrated by guards and recorded on film as trophy shots. And lawyers created the rules for the treatment of so-called enemy combatants the United States held at Guantanamo Bay. I interviewed dozens of former detainees, men never charged with a crime, who endured years of mistreatment prescribed by US government lawyers in violation of international law. Government lawyers and politicians led the public to believe that harsh treatment, even torture, of suspected terrorists was necessary to keep us safe. Your art asks us to confront this bargain and to reconsider what we become as a nation if we accept that premise, and you offer us a way forward.

You said at the time of the exhibit that you were motivated to paint the series by your outrage that the United States, which has stood for democracy and the rule of law, would commit such abuse. Your Abu Ghraib collection conveys the suffering of Iraqi prisoners. Yet through your iconic style of voluminous forms, you also render the victims literally larger than life and give their bodies a weight that suggests a hyper-permanence. Their humanity outlives the outrages inflicted on them by US soldiers. Humanity will endure in spite of depredations, but whether ruptures in the rule of law are mended by justice is up to us. And I think this is what you meant when you said of these works: “Art is a permanent accusation.”

Thanks to your permanent gift of the series to the university, I can view a few of the canvasses on display at our law school. They demand that viewers investigate the causes of the US descent into systematic torture and the path to correct the injustice. The paintings confront the audience with the dangers of believing that we must trade human rights for security; that it is acceptable to strip individuals of dignity simply because a powerful state has labeled them terrorists. The paintings accuse lawyers of justifying rules that strip individuals of fundamental due process protections against arbitrary arrest, imprisonment, and torture.

Today, we find ourselves in the midst of another shocking rollback of fundamental rights and inversion of the rule of law, this time closer to home. The Supreme Court’s overturning of Roe v. Wade ushers in an era in which forced pregnancy, a form of torture under international law, is legal in the United States. There is a dangerous throughline from Abu Ghraib to the Dobbs decision: when we dehumanize one category of persons and legalize control over their bodies through direct or indirect violence, we make it easier to apply the same logic to an ever-expanding menu of targets. 

It is more than two decades after 9/11 and we as a society have not yet answered your accusation, Mr. Botero, to our detriment. Yet progressive lawyers and students continue to name torture and fight injustice when it is unpopular to do so. Justice remains a work in progress, which is why we need compelling art, like yours, to continue to challenge us to action.

October 4, 2023. Laurel E. Fletcher, Visiting Scholar (Fall 2023).
Laurel E. Fletcher is Chancellor’s Clinical Professor of Law at UC Berkeley, School of Law where she co-directs the International Human Rights Law Clinic and the Miller Institute for Global Challenges and the Law.

This post reflects the opinions of the author and not necessarily the views of NYU, NYU Law or the Center for Human Rights and Global Justice. 


TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, in the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments in Brazil may have significant global implications, potentially inspiring other more creative, rights-based, and socio-economically grounded regulation of emerging technologies in the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for the limited value given to public input in comparison to the available participatory and multi-stakeholder mechanisms, the Commission of Jurists took specific precautions to be more open to public input. Their proposed alternative draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially for the Global South, considering Brazil’s position in other discussions related to internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But this was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency dedicated to regulating the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for the companies developing or employing AI, and there were some guidelines for the government on how it should promote the development of AI. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible. The bill was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. There is other legislation that applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action using consumer protection legislation against Via Quatro, a private company responsible for the subway line 4-Yellow of Sao Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, both for the lack of racial and regional diversity among its membership and for the need to bring different areas of knowledge into the debate. This criticism reflects the socio-economic realities of Brazil, one of the most unequal countries in the world, where inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested through this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the new proposed bill had several elements that were very different from Bill 21/20. First, the new bill borrows from the EU’s AI Act by adopting a risk-based approach: obligations are distinguished according to the risks they pose. However, the new bill, following the Brazilian tradition of structuring regulation from the perspective of individual and collective rights, merges the European risk-based approach with a rights-based approach. The bill confers individual and collective rights that apply in relation to all AI systems, independent of the level of risk they pose.

Secondly, the new bill includes additional obligations for the public sector, considering its differential impact on people’s rights. For example, there is a ban on the processing of racial information, and there are provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security, this proposal was not adopted: instead, the bill establishes a moratorium, requiring that a specific law be approved to regulate this use.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, in May 2023 the president of the Senate presented a new bill for AI regulation replicating the Commission’s proposal. On 16th August 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen following the end of the internal commission’s work, as political decisions will shape the next developments. However, what is important to have in mind is the progress that the discussion has reached so far, from an initial bill that was very minimal in scope, and supported the idea of minimal regulation, to one that is much more protective of individual and collective rights and considerate of Brazil’s particular socio-economic realities. Brazil has played an important progressive role historically in global discussions on the regulation of emerging technologies, for example with the discussions of its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in such a role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds a bachelor’s and a master’s degree from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.


TECHNOLOGY AND HUMAN RIGHTS

Contesting the Foundations of Digital Public Infrastructure

What Digital ID Litigation Can Tell Us About the Future of Digital Government and Society

Many governments and international organizations have embraced the transformative potential of ‘digital public infrastructure’—a concept that refers to large-scale digital platforms run by or supported by governments, such as digital ID, digital payments, or data exchange platforms. However, many of these platforms remain heavily contested, and recent legal challenges in several countries have vividly demonstrated some of the risks and limitations of existing approaches.

In this short explainer, we discuss four case studies from Uganda, Mexico, Kenya, and Serbia, in which civil society organizations have brought legal challenges to contest initiatives to build digital public infrastructure. What connects the experiences in these countries is that efforts to introduce new national-scale digital platforms have had harmful impacts on the human rights of marginalized groups—impacts that, the litigants argue, were disregarded as governments rolled out these digital infrastructures, and which are wholly disproportionate to the purported benefits that these digital systems are supposed to bring.

These four examples therefore hold important lessons for policymakers, highlighting the urgent need for effective safeguards, mitigations, and remedies as the development and implementation of digital public infrastructure continues to accelerate.

The explainer document builds upon discussions we had during an event we hosted, entitled “Contesting the Foundations of Digital Public Infrastructure: What Digital ID Litigation Can Tell Us About the Future of Digital Government and Society,” where we brought together the civil society actors who have been litigating these four different cases.

August 28, 2023. Katelyn Cioffi, Victoria Adelmant, Danilo Ćurčić, Brian Kiira, Grecia Macías, and Yasah Musa


HUMAN RIGHTS MOVEMENT

Law Clinics Condemn U.S. Government Support for Haiti’s Regime as Country Faces Human Rights and Humanitarian Catastrophe

To mark the second anniversary of the assassination of Haitian President Jovenel Moïse, the Global Justice Clinic and the International Human Rights Clinic at Harvard Law School submitted a letter to Secretary of State Antony Blinken and Assistant Secretary Brian Nichols calling on the U.S. government to cease to support the de facto Ariel Henry administration. Progress on human rights and security and a return to constitutional order will only be possible if Haitian people have the opportunity to change their government.

In the wake of Moïse’s murder, and at the urging of the United States, Dr. Henry assumed leadership as de facto prime minister. For the past two years, Dr. Henry has presided over a humanitarian and human rights catastrophe. He has consolidated power in what remains of Haiti’s institutions and has proposed to amend the Constitution in an unlawful manner. Further, there is evidence tying Dr. Henry to the assassination of President Moïse. Despite the monumental failure of Dr. Henry’s government, the United States continues to support this illegitimate and unpopular regime.

The letter declares that any transitional government must be evaluated against Haiti’s Constitution and established human rights principles. Proposals such as Dr. Henry’s that violate the spirit of the Constitution and further state capture cannot be a path to democracy.

This post was originally published as a press release on July 10, 2023 by the Global Justice Clinic at NYU School of Law, and the International Human Rights Clinic at Harvard Law School. 


TECHNOLOGY AND HUMAN RIGHTS

Shaping Digital Identity Standards

An Explainer and Recommendations on Technical Standard-Setting for Digital Identity Systems.

In April 2023, we submitted comments to the United States National Institute of Standards and Technology (NIST) to contribute to its Guidelines on Digital Identity. Given that the Guidelines are highly technical and written for a specialist audience, we published this short “explainer” document in the hope of providing a resource that empowers other civil society organizations and public interest lawyers to engage with technical standard-setting bodies and raise human rights concerns related to digitalization in the future. This document therefore sets out the importance of standards bodies, provides an accessible “explainer” on the Digital Identity Guidelines, and summarizes our comments and recommendations.

The National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, is a prominent and powerful standards body. Its standards are influential, shaping the design of digital systems in the United States and elsewhere. Over the past few years, NIST has been in the process of creating and updating a set of official Guidelines on Digital Identity, which “present the process and technical requirements for meeting digital identity management assurance levels … including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.”

The primary audiences for the Guidelines are IT professionals and senior administrators in U.S. federal agencies that utilize, maintain, or develop digital identity technologies to advance their missions. The Guidelines fall under a wider NIST initiative to design a Roadmap on Identity Access and Management that explores topics like accelerating the adoption of mobile driver’s licenses, expanding biometric measurement programs, promoting interoperability, and modernizing identity management for U.S. federal government employees and contractors.

This technical guidance is particularly influential, as it shapes decision-making surrounding the design and architecture of digital identity systems. Biometrics, identity, and security companies frequently cite their compliance with NIST standards to promote their technology and to convince governments to purchase their hardware and software products to build digital identity systems. Other technical standards bodies look to NIST and cite NIST standards. These technical guidelines thus have a great deal of influence well beyond the United States, affecting what is deemed acceptable within digital identity systems, such as how and when biometrics can be used.

Such technical standards are therefore of vital relevance to all those who are working on digital identity. In particular, these standards warrant the attention of civil society organizations and groups who are concerned with the ways in which digital identity systems have been associated with discrimination, denial of services, violations of privacy and data protection, surveillance, and other human rights violations. Through this explainer, we hope to provide a resource that can be helpful to such organizations, enabling and encouraging them to contribute to technical standard-setting processes in the future and to bring human rights considerations and recommendations into the standards that shape the design of digital systems. 


HUMAN RIGHTS MOVEMENT

Fair Pay for Public Defenders: If Mongolia can do it, any country can

On the first day of 2023, Mongolia’s public defenders received a 300% pay raise. A new law took effect on January 1st that ties the compensation of publicly funded defense attorneys to their courtroom counterparts, prosecutors. Although Mongolia ranks among the world’s poorest countries, it has achieved something that many of the world’s wealthiest states have failed to: pay equity between public defenders and public prosecutors.

Oyunchimeg Ayush (wearing blue in the photo), then the head of the Mongolian state agency responsible for public defense.

A central tenet of adversarial legal systems is that justice is best served when opposing sides are fairly matched. As the European Court of Human Rights put it, “[i]t is a fundamental aspect of the right to a fair trial that criminal proceedings…should be adversarial and that there should be equality of arms between the prosecution and defence.” Similarly, the Inter-American Court of Human Rights says that public defenders should be empowered to act “on equal terms with the prosecution.”

If the goal is a fair fight in the courtroom, it seems obvious that paying public defenders just a third of what prosecutors make would detract from that goal. Yet around the world, such pay disparities are commonplace, a phenomenon I saw firsthand as Global Policy Director for the International Legal Foundation, an NGO that builds public defender systems across the globe.

One reason for this disparity is that most domestic constitutions are silent on this issue. And even in the realm of international law, where the “equality of arms” principle is a well-established component of the bedrock international instrument on fair trial rights, courts have not interpreted this to require “material equality” between prosecution and defense. For example, this ICTR case found no fault with the fact that the prosecution’s team comprised 35 investigators deployed for several years, while the defense team had just two investigators paid to work for a few months. 

Instead, equality of arms is mainly conceived of in procedural terms, such as this HRC case where the court’s failure to allow defense counsel to cross-examine the victim was found to violate the principle. As applied to resources, equality of arms requires only that the resources available to the accused are “adequate” to present a full defense (as the Caribbean Court of Justice points out in §33).

Absent promising legal grounds, the battle for pay parity must be fought in the political arena. But there are major challenges here, too, mainly that elected officials are not usually keen on funding services for people accused of heinous crimes. Public defenders around the world have had to embrace vigorous strategies to compel political action, such as labor strikes and joining forces with prosecutors.

So how did Mongolia do it? Dedicated advocacy by a committed public official.

Oyunchimeg Ayush (wearing blue in the photo to the right), then the head of the state agency responsible for public defense, had grown tired of trying to recruit and retain qualified attorneys on salaries 70-80% lower than prosecutors and judges. She saw the unequal pay not only as unfair but as inefficient: high turnover increased recruitment and training costs and yielded a less-experienced workforce.

So, she started making her case for equal pay. She met with legislators, justice system stakeholders, and cabinet ministers, and found a key ally in Khishgeegiin Nyambaatar, the Minister of Justice and Home Affairs. She also reached out to the ILF to ask for research on pay parity and examples of other jurisdictions that had achieved it. We pointed her to Argentina, which passed a parity law in 2015, and to the American state of Connecticut, which has had a parity law for 30 years and has been recognized for excellence. This partnership between local and international actors echoes the ongoing debate among human rights scholars like Gráinne de Búrca, Margaret Keck, Kathryn Sikkink, and others about how human rights reform is actually achieved. Eventually, Mongolia’s Parliament, known as the Great Khural, amended the legal aid law to require that public defender wage rates equal those received by prosecutors.

Mongolia’s achievement is all the more impressive in light of its economic constraints. The Mongolian government’s annual budget is roughly $6 billion. Juxtapose this with the American states of Florida and Oregon, whose failure to pass pay parity legislation in recent years was largely justified on budgetary grounds. Oregon’s annual budget? $67 billion. Florida’s? $101.5 billion.

Though Mongolia’s achievement is monumental, even these reforms do not amount to true equality of arms between public defenders and prosecutors. In recent years, many commentators have argued that individual pay parity—between defense and prosecution lawyers—is insufficient to ensure an equal playing field. Instead, they argue that what is needed is institutional parity. For example, the leading international instrument on good practices for public defender systems calls for “fair and proportional distribution of funds between prosecution and legal aid agencies,” and the American Bar Association says that parity should extend beyond salaries to include workloads, technology, facilities, investigators, support staff, legal research tools, and access to forensic services and experts.

The inclusion of defense investigators is particularly important. Prosecutors aren’t the only government agents who help prosecute a criminal case. Much of the work of collecting evidence and facilitating witness testimony is done by the police. But police investigations are often subtly (or not so subtly) shaped by the prosecution’s theory of the case, and police agencies have historically been less than eager to turn over exculpatory evidence. For this reason, public defender performance standards generally mandate that defense attorneys conduct their own independent investigations. A truer apples-to-apples comparison of public defense agency budgets should include not only the prosecution agency’s budget, but also some portion of the police budget.

Mongolia’s revised law does not yet achieve parity on this institutional level, but individual parity is still a huge and significant step, one that is particularly remarkable in light of Mongolia’s economic constraints. Their achievement stands as an admonition to wealthier jurisdictions who claim that pay parity is too expensive. 

Congratulations to the members of the Great Khural, for passing this law; Minister Nyambaatar, for championing it; Oyunchimeg Ayush, for catalyzing this effort; and, above all, to the Mongolian public defenders whose pay finally reflects their vital role in achieving justice. 

May 19, 2023. Ben Polk, Bernstein Institute for Human Rights of NYU Law School. 

This post reflects the opinions of the author and not necessarily the views of NYU, NYU Law or the Center for Human Rights and Global Justice.


CLIMATE & ENVIRONMENT

Relocation Now, Mine-Affected Communities in the D.R. and their Allies tell Barrick Gold

As Barrick Gold prepares to hold its Annual General Meeting in Toronto tomorrow, Dominican communities impacted by the company’s Pueblo Viejo mine and their allies have issued an open letter to the company demanding immediate community relocation.

The letter, from the Espacio Nacional por la Transparencia en las Industrias Extractivas (National Space for Transparency in the Extractive Industry, ENTRE) and the Comité Nuevo Renacer, alleges grave harms to nearby communities’ health, livelihoods, and environment due to the mine’s operations. The letter also raises concerns about Barrick’s plans to expand the Pueblo Viejo mine—already one of the world’s largest gold mines—including by constructing a new tailings dam. Dominican, Canadian, and U.S.-based allies, including the Global Justice Clinic, signed on to the letter in solidarity.

Last month, communities affected by Barrick mines in Alaska, Argentina, the Dominican Republic, Nevada, Pakistan, Papua New Guinea, and the Philippines came together in a Global Week of Action, calling out the gap between Barrick’s rhetoric on human rights and its record. GJC works in solidarity with communities near Cotuí impacted by Barrick’s operations.

This post was originally published on May 1, 2023.