U.S. Government must adopt moratorium on mandatory use of biometric technologies in critical sectors, look to evidence abroad, urge human rights experts

As the White House Office of Science and Technology Policy (OSTP) embarks on an initiative to design a ‘Bill of Rights for an AI-Powered World,’ it must begin by immediately imposing a moratorium on the mandatory use of AI-enabled biometrics in critical sectors, such as health, social welfare programs, and education, argue a group of human rights experts at the Digital Welfare State & Human Rights Project (the DWS Project) at the Center for Human Rights and Global Justice at NYU School of Law, and the Institute for Law, Innovation & Technology (iLIT) at Temple University School of Law.

In a 10-page submission responding to OSTP’s Request for Information, the DWS Project and iLIT argue that biometric identification technologies such as facial recognition and fingerprint-based recognition pose existential threats to human rights, democracy, and the rule of law. Drawing on comparative research and consultation with some of the leading international experts on biometrics and human rights, the submission details evidence of these concerns in countries including Ireland, India, Uganda, and Kenya. It catalogues the often-catastrophic effects of biometric failure, of unwieldy administrative requirements imposed on public services, and of the pervasive lack of legal remedies and basic transparency about the use of biometrics in government.

“We now have a great deal of evidence about the ways that biometric identification can exclude and discriminate, denying entire groups access to basic social rights,” said Katelyn Cioffi, a Research Scholar at the DWS Project. “Under many biometric identification systems, you can be denied health care, access to education, or even a driver’s license if you are not able or willing to authenticate aspects of your identity biometrically.” An AI Bill of Rights that allows for equal enjoyment of rights must learn from comparative examples, the submission argues, and ensure that AI-enabled biometrics do not simply perpetuate systemic discrimination. This means looking beyond frequently raised concerns about surveillance and privacy to how biometric technologies affect social rights such as health, social security, education, housing, and employment.

A key factor in the initiative’s success will be much-needed legal and regulatory reform across the United States federal system. “This initiative represents an opportunity for the U.S. government to examine the shortcomings of current laws and regulations, including equal protection, civil rights laws, and administrative law,” stated Laura Bingham, Executive Director of iLIT. “The protections that Americans depend on fail to provide them with the necessary legal tools to defend their rights and safeguard democratic institutions in a society that increasingly relies on digital technologies to make critical decisions.”

The submission also urges the White House to place constraints on the actions of the U.S. government and U.S. companies abroad. “The United States plays a major role in the development and uptake of biometric technologies globally, through its foreign investment, foreign policy, and development aid,” said Victoria Adelmant, a Research Scholar at the DWS Project. “As the government moves to regulate biometric technologies, it must not ignore U.S. companies’ roles in developing, selling, and promoting such technologies abroad, as well as the government’s own actions in spheres such as international development, defense, and migration.”

For the government to mount an effective response to these harms, the experts argue, it must also take heed of the parallel efforts of other powerful political actors, including China and the European Union, which are currently attempting to regulate biometric technologies. At the same time, it must avoid a race to the bottom and resist being drawn into a perceived ‘arms race’ with countries like China by pursuing an increasingly securitized biometric state and allowing the private sector to continue its unfettered ‘self-regulation’ and experimentation. Instead, the U.S. government should focus on acting as a global leader in enabling human rights-sustaining technological innovation.

The submission makes the following recommendations:

  1. Impose an immediate moratorium on the use of biometric technologies in critical sectors: biometric identification should never be mandatory in critical sectors such as education, welfare benefits programs, or healthcare.
  2. Propose and enact legislation to address the indirect and disparate impact of biometrics.
  3. Engage in further review and study of the human rights impacts of biometric technologies as well as of different legal and regulatory approaches.
  4. Build a comprehensive legal and regulatory approach that addresses the complex, systemic concerns raised by AI-enabled biometric identification technologies.
  5. Ensure that any new laws, regulations, and policies are subject to a democratic, transparent, and open process.
  6. Ensure that public education materials and any new laws, regulations, and policies are described and written in clear, non-technical, and easily accessible language.

For more information, please contact:

  • Katelyn Cioffi (Twitter: @katelyncioffi)
  • Victoria Adelmant (Twitter: @VictoriaAdamant)
  • Laura Bingham (Twitter: @laurambing)

The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law investigates how systems of social protection and assistance in countries worldwide are increasingly driven by digital data and technologies. Follow them on Twitter: @humanrightsnyu

The Temple University Institute for Law, Innovation & Technology (iLIT) at Beasley School of Law pursues action research, experiential instruction, and advocacy with a mission to deliver equity, bridge academic and practical boundaries, and inform new approaches to technological innovation in the public interest.

January 17, 2022.