“Leapfrogging” to Digital Financial Inclusion through “Moonshot” Initiatives

TECHNOLOGY & HUMAN RIGHTS

The notion that new technological solutions can overcome entrenched exclusion from banking services and fair credit is quickly gaining widespread acceptance. But tech-based “fixes” often funnel low-income groups into separate, inferior systems and create new tech-driven divisions.

In July 2021, the New York City Mayor’s Office of the Chief Technology Officer launched the NYCx Moonshot: Financial Inclusion Challenge. This initiative seeks to deploy digital solutions to address inequalities in access to financial institutions. As the Chief Technology Officer stated, “Too many people have been left out of the financial system for too long. This disparity means that financial transactions … end up costing more for those who can least afford it.”

One in ten Americans is “unbanked,” meaning that they do not have a bank account. People of color are disproportionately excluded from traditional financial institutions. Banks consistently operate fewer branches in Black, Native American, and Latinx communities, creating “banking deserts,” while the practice of redlining continues. Poorly regulated predatory financial institutions such as payday lenders, which impose higher costs than banks and trap customers in cycles of debt, are highly concentrated in these communities and take advantage of financial exclusion. In New York’s borough of the Bronx, over 49% of households are unbanked and high-cost lenders significantly outnumber banks.

Unequal access to banking means unequal access to fair credit. This compounds inequalities, as a poor credit record increasingly determines crucial outcomes, including higher interest rates on loans, higher insurance premiums, and difficulty obtaining employment or housing.

NYC is pursuing technology-based solutions to address these issues. The Moonshot initiative, which seeks proposals “utilizing breakthrough financial inclusion technology” to bring the unbanked into the financial system, follows previous tech-driven schemes. A recent initiative involved IDNYC, the city’s official identification card launched in 2015. This ID scheme had sought to facilitate access to banking by providing government-issued IDs to groups previously unable to open bank accounts for want of official identification; the ID is explicitly available to undocumented immigrants. However, shortly after its launch, the city’s largest banks dealt a blow to the IDNYC scheme by refusing to accept it as sufficient identification to open accounts. In response, the Mayor’s Office turned to technology. In 2018, it solicited proposals from financial firms to introduce electronic chips, like those used in debit cards, into the ID cards. This would allow IDNYC cardholders to load money onto their ID cards and make payments with them. Such reloadable cards are known as prepaid cards.

This proposed integration of identification and payment functions was not unique. In the U.S., the city of Oakland’s municipal identification scheme enabled cardholders to have their welfare benefits deposited onto the ID card and make payments with it. Also in California, the city of Richmond’s ID similarly functions as a prepaid card. In 2020, MasterCard’s “City Key” card, which combines official identification and payments, was distributed to low-income residents in Honolulu. Outside of the U.S., MasterCard was involved in adding electronic chips to national ID cards in Nigeria, and the Malaysian national ID also functions as a reloadable debit card.

But the proposal to incorporate smartcards into IDNYC was abandoned. Dozens of immigrants’ rights organizations warned that the integration of payment functions increased immigrant cardholders’ risk of surveillance and profiling. Adding the chip would lead to “massive data collection” by the financial technology firm brought into IDNYC and, because such firms are legally required to retain information about cardholders, undocumented immigrants’ data could be subpoenaed by the Trump administration. The Mayor’s Office accepted that these risks were fundamentally in conflict with the inclusionary goals of IDNYC and withdrew the plan.

While the proposal was abandoned, the narratives and driving forces behind it have intensified. Turning to a prepaid card system to “eliminate banking deserts” in NYC followed a well-established script that promises to “leapfrog” over deeply rooted social problems using new technologies. The Gates Foundation, McKinsey, MasterCard, and others have long furthered the narrative that groups left behind by traditional financial institutions can be reached through innovative technological solutions which “leapfrog” banks. Bill Gates famously remarked that “banking is necessary but banks are not,” and today, actors that are not banks, such as payment technology companies and telecommunications firms, increasingly offer “financially-inclusive” services such as mobile money and smartcard solutions in explicit efforts “to ‘disrupt’… traditional banking services.” Prepaid cards especially seek to bypass banks: by their very design they operate without any link to bank accounts.

As such, these technological solutions funnel unbanked groups into a separate, “parallel banking system.” Prepaid cards do not provide access to bank accounts, so cardholders remain unbanked. This is an inferior banking product; cardholders do not gain the same access to the services and fairer credit that bank accounts enable. Financial exclusion persists, but the unbanked now have smartcards.

Further, the companies “disrupting” banking are usually not subject to the same legal obligations as banks, nor do they provide the same financial protections. Within these separate, technology-enabled payment systems for the unbanked, the extractive and predatory practices that financial inclusion efforts are supposed to address re-emerge. NYC’s Chief Technology Officer had lamented that financial exclusion means that transactions cost “more for those who can least afford it”—but when Oakland launched its smartcard ID, the company running the prepaid function levied a raft of fees on cardholders, including $0.75 per transaction, $1 per reloading of funds, and a $2.99 monthly fee. These fees were higher than those charged by banks. Moreover, the insistence that electronic payments will solve financial exclusion is motivated by a desire to monetize new customers’ transaction data. Companies are racing to “capture the data of the newly ‘included’” and uncover the “financial lives of the poor” as a new market segment.

As the Immigrant Defense Project and others argued, turning IDNYC into a prepaid card would therefore “be perpetuating, not resolving, inequality in our banking system.” In our work outside the U.S., we see the same technological solutions being embraced, even as they siphon low-income groups toward less-regulated, separate systems. For example, in South Africa and Australia, recipients of state benefits are forced onto prepaid cards not linked to traditional bank accounts. Still, “digital financial inclusion” through these technologies is being hailed as the solution to financial exclusion.

The 2021 Moonshot initiative appears to be based on the same ideals. The very notion of a “moonshot” is solutionist: it connotes a monumental, technologically driven effort to achieve a lofty goal. Official “launch” documents state that technology can “help solve the most pressing issues of people’s lives.” Rather than seeking to work with banks, the scheme turns to developers: the unbanked need “new options.” This focus on technology can obscure the root causes of financial exclusion, namely racism, discrimination, and predatory financial practices. “New options” will too often mean separate, inferior systems; and eschewing attempts to resolve inequalities within the “old options” leaves harmful practices unaddressed: the linking of everything from housing to insurance to credit reports, ongoing redlining, and the closing of bank branches without regard for those left behind.

September 21, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Wrong Prescription: The Impact of Privatizing Healthcare in Kenya

INEQUALITIES

A collaboration between The Economic and Social Rights Centre-Hakijamii and the Center for Human Rights and Global Justice at New York University School of Law.

The 49-page report draws from more than 180 interviews with healthcare users and providers, government officials, and experts, and finds that the government-backed expansion of the private healthcare sector in Kenya is leading to exclusion and setting back the country’s goal of universal health coverage. 

The report documents how policies designed to increase private sector participation in health, in combination with chronic underinvestment in the public healthcare system, have led to a rapid increase in the role of for-profit private actors and undermined the right to health. Privatizing healthcare has proven costly for individuals and the government, and pushed Kenyans into poverty and crushing debt. While the wealthy may be able to access high-quality private care, for many, particularly in lower-income areas, the private sector offers low-quality services that may be inadequate or unsafe. The report concludes with a call to prioritize the public healthcare system.

Everyone Counts! Ensuring that the human rights of all are respected in digital ID systems

TECHNOLOGY & HUMAN RIGHTS

The Everyone Counts! initiative was launched in the fall of 2020 with a firm commitment to a simple principle: the digital transformation of the state can only qualify as a success if everyone’s human rights are respected. Nowhere is this more urgent than in the context of so-called digital ID systems.

Research, litigation, and broader advocacy on digital ID in countries like India and Kenya have already revealed the dangers of exclusion from digital ID for ethnic minority groups[1] and for people living in poverty.[2] However, a significant gap still exists between the magnitude of the human rights risks involved and the urgency of research and action on digital ID in many countries. Despite these systems’ active promotion and use by governments, international organizations, and the private sector, in many cases we simply do not know how digital ID systems lead to social exclusion and human rights violations, especially for the poorest and most marginalized.

Therefore, the Everyone Counts! initiative aims to engage in both research and action to address social exclusion and related human rights violations that are facilitated by government-sponsored digital ID systems.

Does the emperor have new clothes? The yawning evidence gap on digital ID

The common narrative behind the rush towards digital ID systems, especially in the Global South, is by now familiar: “As many as 1 billion people across the world do not have basic proof of identity, which is essential for protecting their rights and enabling access to services and opportunities.”[3] Digital ID is presented as a key solution to this problem, while simultaneously promising lower income countries the opportunity to “leapfrog” years of development via digital systems that assist in “improving governance and service delivery, increasing financial inclusion, reducing gender inequalities by empowering women and girls, and increasing access to health services and social safety nets for the poor.”[4]

This perspective, for which the World Bank and its Identification for Development (ID4D) Initiative have become the official “anchor” internationally, presents digital ID systems as a force for good. The Bank acknowledges that exclusionary issues may arise, but is confident that such issues may be overcome through good intentions and safeguards. Digging underneath the surface of these confident assertions, however, one finds that there appears to be remarkably little research into the overall impact of digital ID systems on social exclusion and a range of related human rights. For instance, after entering the digital ID space in 2014, publishing prolifically, and guiding billions of development dollars into furthering this agenda, the World Bank’s ID4D team concedes in its 2020 Annual Report that “given that this topic is relatively new to the development agenda, empirical research that rigorously evaluates the impact of ID systems on development outcomes and the effectiveness of strategies to mitigate risks has been limited.”[5] In other words, despite warning signs from several countries around the world, including chilling stories of people who have died because they were shut out of biometric ID systems,[6] the digital ID agenda moves full steam ahead without full understanding of its exclusionary potential.

Making sure that everyone truly counts

While the Everyone Counts! initiative has only a fraction of the resources of ID4D, we hope to inject some much-needed reality into this discourse through our work. We will do this by undertaking, together with research partners in different countries, empirical human rights research that investigates how the introduction of a digital ID system leads to or exacerbates social exclusion. For example, we are currently undertaking a joint research project with Ugandan research partners focused on Uganda’s digital ID system, Ndaga Muntu, and its impact on poor women’s right to health and older persons’ right to social assistance.

Our presence at a leading university and law school underlines our commitment to high quality and cutting-edge research, but we are not in the business of knowledge accumulation purely for its own sake. We will aim to transform our research into action. This could come in the form of strategic litigation and advocacy, such as the work by our partners described below, or in the form of network building and information sharing. For instance, together with co-sponsors like the UN Economic Commission for Africa (UNECA) and the Open Society Justice Initiative (OSJI), we are hosting a workshop series for African civil society organizations on digital ID and exclusion. The series creates a space where activists hoping to resist the exclusion associated with digital ID can come together, gain access to tools, information and networks, and form a community of practice that facilitates further activism.

Ensuring non-discriminatory access to vaccines: An early case study 

A recent example from Uganda demonstrates just how effective targeted action against digital ID systems can be. The government began rollout of its national digital ID system Ndaga Muntu as early as 2015, and it has gradually become a mandatory requirement to access a range of social services in Uganda.

To address the threat of COVID-19, the Ugandan government recently began a free, national vaccine program. Among those eligible to receive the vaccine were all adults over the age of 50. On March 2, however, the Ugandan Minister of Health announced that only those Ugandan citizens who could produce a Ndaga Muntu card, or at least a national ID number (NIN), would be able to receive the vaccine. Conservative estimates suggest that over 7 million eligible Ugandans have not yet received their national ID card.

Our research partners, the Initiative for Social and Economic Rights (ISER) and Unwanted Witness (UW), sued the Ugandan government on March 5 to challenge the mandatory requirement of the Ndaga Muntu.[7] They argued not only that the national ID requirement would exclude millions of eligible older persons from receiving the vaccine, but also that it would set a dangerous precedent allowing for further discrimination in other areas of social services.[8]

On March 9, the Ministry of Health announced that it would change the national ID requirement so that alternative forms of identification documents, which are much more accessible to poor Ugandans, could be used to access the COVID-19 vaccine.[9] This was a critical victory for the millions of Ugandans who seek access to the life-saving vaccine–but it is also a warning sign of the subtle and pernicious ways that the digital ID system may be used to exclude.

Humans first, not systems first

The Ugandan case study shows the urgent need for the human rights movement to engage in discussions about digital transformation so that fundamental rights are not lost in the rush to build a “modern, digital state.” In our work on this initiative, we will remain similarly committed to prioritizing how individual human beings are affected by digital ID systems. Listening to their stories, understanding the harms they experience, and channeling their anger and frustration to other, more privileged and powerful audiences, is our core purpose.

Digital transformation is a field prone to a utilitarian logic: “if 99% of the population is able to register for a digital ID system, we should celebrate it as a success.” Our qualitative work not only challenges the supposed benefits for the 99%, but also emphasizes that the remaining 1% represents a multitude of individual human beings who may be victimized. Our research so far has only confirmed our intuition that digital ID systems can deliver significant harms, particularly for those who are poorest, most vulnerable, and least powerful in society. These excluded voices deserve to be heard and to become a decisive factor in shaping our digital future.

April 6, 2021. Christiaan van Veen and Katelyn Cioffi.

Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law. 

Katelyn Cioffi, Senior Research Scholar, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

Prominent human rights expert admitted as amicus curiae in groundbreaking legal challenge to Ugandan national digital ID system

TECHNOLOGY & HUMAN RIGHTS

Today, at the High Court of Uganda in Kampala, the Hon. Justice Boniface Wamala issued a decision to admit the application of Professor Philip Alston of New York University School of Law to participate as amicus curiae, or ‘friend of the court’, in a petition for the enforcement of human rights challenging the use of the country’s national digital ID system as a pre-condition to access to public services.

The admission of the amicus application is a critical development in this groundbreaking litigation, the latest in a series of legal challenges that have raised concerns about national digital ID systems in countries including India, Kenya, and Jamaica. This case is one of the first globally to center concerns around social and economic rights. The applicants, three Ugandan civil society organizations, argue that the national digital ID system suffers from persistent and severe gaps in coverage, and its integration with the country’s social welfare programs has resulted in the exclusion of vulnerable and marginalized individuals from fundamental services such as social protection and healthcare.

“Given the importance of the national digital ID system and its mandatory usage, it is imperative that it is fully inclusive. All Ugandans, regardless of age or economic status, must be able to access their social welfare benefits,” said Professor Alston. “Today’s decision by the High Court is an important and welcome step in that direction.”

In a 32-page brief, Professor Alston seeks to assist the court in analyzing some of the novel legal questions at the heart of the case. He calls attention to the obligations of the Government of Uganda under international human rights law, the serious consequences that digital and non-digital barriers to public services may have on the enjoyment of rights, and the high burden of proof that falls on the government to justify any measure that leads to exclusion. The brief also emphasizes the need to ensure equal treatment and non-discrimination in the enjoyment of these rights, particularly given the high risk that any negative impacts of the digital ID system will continue to fall disproportionately on poor and marginalized groups.

“As many governments turn to digital ID systems to mediate access to essential public services, there is an urgent need for courts to ensure the protection of economic and social rights,” said Professor Alston.

Setting aside the objections of the two government respondents, the Attorney General and the National Identification & Registration Authority, Judge Boniface Wamala stated that the “positive benefits of the intervention as amicus curiae outweighs any possible opposition from the parties in the main cause. It is in public interest, the interest of justice, the protection and progressive development of human rights and socio-economic reform that the leave sought in the application is granted.”

“The court and by extension the multitude of Ugandans whose human rights the main petition is fighting to protect shall benefit from the input and expertise that Prof. Philip shall contribute in its adjudication,” said Counsel Elijah Enyimu, who represented Professor Alston. “The contents of the amicus brief shall be elucidatory on the standards and protections necessary for the realization of ESCR in Uganda.”

The Applicants and Respondents will be back in court to argue their cases on April 5, 2023. In the meantime, those who have missed out on social protection payments or been turned away from health centers due to their inability to access the national digital ID will continue to wait for a judicial decision.

This post was originally published as a press statement on March 24, 2023. 

The World Bank and co. may be paving a ‘Digital Road to Hell’ with support for dangerous digital ID

TECHNOLOGY & HUMAN RIGHTS

Global actors, led by the World Bank, are energetically promoting biometric and other digital ID systems that are increasingly linked to large-scale human rights violations, especially in the Global South. A report by researchers at New York University warns that these systems, promoted in the name of development and inclusion, might be achieving neither. Rather than the equitable digital future envisioned by the World Bank and its Identification for Development (ID4D) Initiative, the report argues that “despite undoubted good intentions on the part of some, [these systems] may well be paving a digital road to hell.”

Report cover: Paving a digital road to hell?

The report, at over 100 pages, is intended to be a “carefully researched primer as well as a call to action to all of those with an interest in safeguarding human rights to set their gaze more firmly on the multidimensional dangers associated with digital ID systems.” Governments around the world have been investing heavily in digital identification systems, often with biometric components (digital ID). The rapid proliferation of such systems is driven by a new development consensus, packaged and promoted by key global actors like the World Bank, but also by governments, foundations, vendors and consulting firms. This new ‘manufactured consensus’ holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights.

Drawing inspiration from the Aadhaar system in India, the dangerous digital ID model being promoted prioritizes what the primer refers to as an ‘economic identity’. The goal of such systems is primarily to establish the ‘uniqueness’ of individuals, commonly with the help of biometric technologies. The ultimate objective is to facilitate economic transactions and private sector service delivery while also bringing new, poorer individuals into formal economies and ‘unlocking’ their behavioral data. As the Executive Chairman of the influential ID4Africa, a platform where African governments and major companies in the digital ID market meet, put it at the start of its 2022 Annual Meeting earlier this week, digital ID is no longer about identity alone but “enables and interacts with authentication platforms, payments systems, digital signatures, data sharing, KYC systems, consent management and sectoral delivery platforms.”

Unlike ‘traditional systems’ of civil registration, such as birth registration, this new model of economic identity commonly sidesteps difficult questions about the legal status of those it registers and the rights associated with that status. The promises of inclusion and flourishing digital economies might appear attractive on paper, but digital ID systems have consistently failed to deliver on these promises in real world situations, especially for the most marginalized. In fact, evidence is emerging from many countries, most notably the mega digital ID project Aadhaar in India, of the severe and large-scale human rights violations linked to this model. These systems may in fact exacerbate pre-existing forms of exclusion and discrimination in public and private services. The use of new technologies may furthermore lead to novel forms of harm, including biometric exclusion, discrimination, and the many harms associated with “surveillance capitalism.”

Meanwhile, the benefits of digital ID remain ill-defined and poorly documented. From the evidence that does exist, it seems that those who stand to benefit most may not be those “left behind,” but instead a small group of companies and governments. After all, where digital ID systems have tended to excel is in generating lucrative contracts for biometrics companies and enhancing the surveillance and migration-control capabilities of governments.

With such powerful backing, digital ID has taken on the guise of an unstoppable juggernaut and inevitable hallmark of modernity and development in the 21st century, and the dissenting voices of civil society have been written off as Luddites and barriers to progress. Nevertheless, the report calls on human rights organizations, other civil society organizations, and advocates who may have been on the sidelines of these debates to get more involved. The actual and potential human rights violations arising from this model of digital ID can be severe and potentially irreversible. The human rights community can play an important role in ensuring that such transformational changes are not rushed and are based on serious evidence and analysis. It can also ensure that there is sufficient public debate, with full transparency and involving all relevant stakeholders, not least the most marginalized and most affected individuals. Where necessary to safeguard human rights, such dangerous digital ID systems should be stopped altogether.

This post was originally published as a press release on June 17, 2022.

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

TECHNOLOGY & HUMAN RIGHTS

On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations, “Transformer States: A Conversation Series on Digital Government and Human Rights,” exploring the digital transformation of governments around the world. In this first installment of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. By interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit in that claims are filed and managed via an online account, and calculations of recipients’ entitlements are also reliant on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the taxation office’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, in order to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings as well as on the basis of a long list of data points about her circumstances, including how many children she has, her health situation and her housing. UC is therefore ‘dynamic,’ as the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms noted that it would “cut through the complexity of the existing system” by introducing online systems which would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments surrounding the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates to the software every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such ‘unknown’ was the Covid-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. The old, pre-UC system would have been unlikely to cope with this surge, and the performance compared very favorably with the failures seen in some US states—some New Yorkers, for example, were required to fax their applications for unemployment benefits.

The conversation then turned to the reality of UC from the perspective of recipients. For example, half of claimants were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures which show that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online and offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here – why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system, or people get a job before completing the application, or they realize how little payment they will receive, or that they will have to wait around five weeks to receive any payment.

But had the UK government not been overly optimistic about future UC users’ access to, and ability to use, digital systems? For example, the 2012 DWP Digital Strategy stated that “most of our customers and claimants are already online and more are moving online all the time,” while only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection via either broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to the technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group (CPAG) have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly, for example. CPAG was commended in the discussion for having “started to care about design” and proposing specific design changes in its reports. Richard noted that the elements which were left out of UC’s digital design, or which were not automated at all, highlight the choices that have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. That the technological design of the system omits information about these entitlements demonstrates the importance and power of design choices, but it is unclear whether such choices were the result of political decisions or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the “design culture” among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge number of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that demands this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants: they have more paperwork to find, they have to contact their landlord to get a signed copy of their lease, and so forth. Wanting this level of means-testing will result in a complex policy, and “there is only so much a designer can do to design away that complexity,” as Richard underlined. That said, Richard also argued that part of the problem here is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need for a rethink about how the government organizes its work and argued that government should prioritize shared reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent around its digital services. There is, as ever, an increasing information asymmetry between government and individuals, and this transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

The Aadhaar Mirage: A Second Look at the World Bank’s “Model” for Digital ID Systems

TECHNOLOGY & HUMAN RIGHTS

The Aadhaar Mirage: A Second Look at the World Bank’s “Model” for Digital ID Systems 

Drawing inspiration from India’s Aadhaar system, the World Bank is promoting a dangerous digital ID model in the name of providing “a legal identity for all.” But rather than providing a model, Aadhaar is merely a mirage—an illusion of inclusiveness, accuracy, and universal identity.

Last month saw the publication of a report on the World Bank’s ill-conceived approach to digital ID, described as “essential reading for all concerned about human rights and development” by former UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston. As the press release summarizes:

“Governments around the world have been investing heavily in digital identification systems, often with biometric components (digital ID). The rapid proliferation of such systems is driven by a new development consensus, packaged and promoted by key global actors like the World Bank, but also by governments, foundations, vendors, and consulting firms. This new ‘manufactured consensus’ holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights.”

The report argues that India’s digital identification system has been central to the formation and promotion of this consensus. This has also been increasingly clear to me in my experience as an economist and identity management consultant who has provided advisory services to the World Bank. For the World Bank—and particularly its Identification for Development (ID4D) cross-sectoral practice—the Indian system, named Aadhaar, has become the singular answer to development and a key source of inspiration. This continues irrespective of the body of evidence which shows how poor a “fit” the Aadhaar system is for identity management in India, and even more so elsewhere. Aadhaar represents a mirage: it does not evidence the universality, inclusiveness, unprecedented enrollment speed, meaningful legal identity, or accuracy that it is claimed to represent.

The World Bank’s own data on the completeness of ID systems displays a “20/80 rule”: the overwhelming odds are that digital ID systems not built on a functional civil registration system (in which births, deaths, marriages, and so forth are recorded) will exclude 20% or more of (mostly vulnerable) people, or will take at least 80 years to achieve full coverage. Developing countries often abandon underperforming ID systems obtained at great cost, only to launch new and even more sophisticated systems. Instead of using existing service infrastructure for civil registration, new digital ID systems are rolled out through a quick-fix “mobile campaign,” held once or twice, with mobile enrollment kits and temporary enrollment staff. But this invariably leaves a coverage and service void behind.

But what about Aadhaar, then? Hasn’t Aadhaar enrolled almost all of the Indian population (1.29 billion by March 2021, out of 1.39 billion), in just a decade (from September 2010), at minimal cost (USD $1.60 per enrollment)? If one believes the data from the Unique Identification Authority of India (UIDAI), then yes. But independent data are unavailable; UIDAI controls the message—even the Comptroller and Auditor General of India (CAG) had to use UIDAI data for its first-ever audit of Aadhaar. Still, CAG found that UIDAI’s operational and financial management have been utterly deficient. Claims about Aadhaar’s impressive coverage and universality might, then, be questionable. Neither is the database accurate: the Aadhaar system has no way of weeding out dead enrollees (about 80 million in 10 years) or people leaving India (including Indian citizens). CAG also found UIDAI’s digital archiving, and its collection and storage of the physical documents that back up enrollments, to be inadequate.

Furthermore, claims about the uniqueness guaranteed by biometric technologies within Aadhaar are also illusory. There is no uniqueness for the approximately 25 million children under five years old enrolled in the database. Multiple Aadhaars were issued to the same persons, while different Aadhaar numbers associated with the same biometric data were issued to multiple people. Fingerprint authentication success for 2020–21 was only (an unverifiable) 74–76%. This may well be the canary in the coal mine, indicating exaggerated coverage claims for Aadhaar. Indeed, a Privacy International study explains the statistical impossibility of a unique biometric profile in a population of 1.39 billion people. Rather, each Indian person has an average of 17,500 indistinguishable biometric “doubles.”

These claims about the benefits of biometrics have far-reaching implications as Aadhaar is linked to other areas of governance. A new law provides for the use of Aadhaar to verify the electoral roll. Weeding out “ghost entries” when the uniqueness and de-duplicated nature of the Aadhaar database is disproved is a doomed exercise, and represents another potential threat to India’s democracy.

Aadhaar’s “big numbers” are a mirage too. Proponents claim that over a billion people were newly enrolled at record speed and at low cost. But this is not as unprecedented as is suggested. For elections in India, 900 million voters are registered or verified every five years—which tops Aadhaar’s enrollment accomplishment. And India’s bureaucracy has long provided multiple forms of documentation; for proof of identity, date of birth, and address, enrollees can choose from a menu of no fewer than 106 valid documents. By 2016, fewer than 3 in 10,000 enrollees had lacked valid ID prior to Aadhaar enrollment. The Aadhaar system is a duplication which simply adds on biometrics—which, as we saw, are not the holy grail they are claimed to be. To suggest that other countries, which do not have this multitude of breeder documents and existing enrollment capacities, can copy the Aadhaar approach and obtain widespread coverage is an illusion.

As for claims that Aadhaar brings down costs and increases efficiency: these low costs apply only in India. I have found that digital ID systems in many African countries cost 5 to 10 times more per capita than India’s. The high failure rates of ID systems in many developing countries add to the unbearable costs for poorer countries and their most vulnerable people.

This cries out for a better identity management model—one that is centered around citizenship, with civil registration as the foundation, which seeks to guarantee rights. A model closer to northern European identity management systems comes to mind, or one that is already in use in South Africa. Such systems stand in contrast with Aadhaar, which seeks to side-step the “pesky political issue” of citizenship. This is perhaps the most serious and dangerous element of the mirage: Aadhaar only provides an “economic identity” (with rights limited to government hand-outs, and “voluntary” use for private services), which aims to facilitate economic transactions and private sector service delivery. The UIDAI, then, insists that Aadhaar has “nothing to do with the citizenship issue.”

But Aadhaar’s “citizenship-blindness” is make-believe. Enrollment into Aadhaar was selective in Assam state, for example, where the issuance of digital ID was linked to citizenship determinations. Suddenly, Aadhaar proved to be an exclusionary “citizenship ID” after all. Aadhaar has dangerously played into worrying trends, such as the Citizenship Amendment Act and widespread lack of proof of citizenship—all while proponents claim that it is a model of how to achieve “legal identity for all.”

Aadhaar proves to be a mirage that we see while traveling on “the road to hell,” which is paved with imaginary intentions and is leading to a deadly development destination. Its presentation as a “model” digital ID system should be urgently reconsidered.

July 14, 2022. Drs. Jaap van der Straaten, MBA, is an economist and identity management consultant. In 2016­–2017, he provided advisory services to the World Bank’s ID4D practice. He has published extensively on Elsevier’s SSRN and ResearchGate.

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

TECHNOLOGY & HUMAN RIGHTS

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

National data collection and service prioritization were supposed to make homeless services more equitable and efficient. Instead, they have created more risks and bureaucratic burdens for homeless individuals and homeless service organizations.

While I was serving as an AmeriCorps VISTA member supporting the IT and holistic defense teams at a California public defender’s office, much of my time was spent navigating the data bureaucracy that now weighs down social service providers across the country. In particular, I helped social workers and other staff members use tools like the Vulnerability Index – Service Prioritization Decision Assistance Tool (VI-SPDAT) and a Homeless Management Information System (HMIS). While these tools were ostensibly designed to improve care for homeless and housing-insecure people, all too often they did the opposite.

An HMIS is a localized information network and database used to collect client-level data and data on the provision of housing and services to homeless or at-risk persons. In 2011, Congress passed the HEARTH Act, mandating the use of HMIS by communities in order to receive federal funding. HMIS demands coordinated entry, a process by which certain types of data are cataloged and clients are ranked according to their perceived need. One of the most common tools for coordinated entry—and the one used by the social workers I worked with—is VI-SPDAT, effectively a questionnaire consisting of a battery of highly invasive questions that seek to determine the level of need of the homeless or housing-insecure individual to whom it is administered.

These tools have been touted as game-changers. Yet while homelessness across the country, and especially in California, continued to decrease modestly in the years immediately following the enactment of the HEARTH Act, it began to increase again in 2019 and increased sharply in 2020, even before the onset of the COVID-19 pandemic. This is not to suggest a causal link; indeed, the evidence suggests that factors such as rising housing costs and a worsening methamphetamine epidemic are at the heart of rising homelessness. But there is little evidence that intrusive tools like VI-SPDAT alleviate these problems.

Indeed, these tools have themselves been creating problems for homeless persons and social workers alike. There have been harsh criticisms from scholars like Virginia Eubanks about the accuracy and usefulness of VI-SPDAT. It has been found to produce unreliable and racially biased results. Rather than decreasing bias as it purports to do, VI-SPDAT has baked bias into its algorithms, providing a veneer of scientific objectivity for government officials to hide behind.

But even if these tools were made more reliable and less biased, they would nonetheless cause harm and stigmatization. Homeless individuals and social workers alike report finding the assessment dehumanizing and distressing. For homeless individuals, it can also feel deeply risky. Those who don’t score high enough on the assessment are often denied housing and assistance altogether. Those who score too high run the risk of involuntary institutionalization.

Meanwhile, these tools place significant burdens on social workers. To receive federal funding, organizations must provide not only an enormous amount of highly intimate information about homeless persons and their life histories, but also a minute accounting of every interaction between the social worker and the client. One social worker would frequently work with clients from 9 to 5, go home to make dinner for her children, and then work into the wee hours of the night attempting to log all of her data requirements.

I once sat through a 45-minute video call with a veteran social worker who broke down in tears, worried that the grant funding her position might be taken away if her record-keeping was less than perfect—but the design of the HMIS made it virtually impossible to be completely honest. The system assumed that four-hour client interactions could easily be broken down into distinct chunks—discussed x problem from 4:15 to 4:30, y problem from 4:30 to 4:45, and so on. Of course, anyone who has ever had a conversation with another human being, let alone a human being with mental disabilities or substance use problems, knows that interactions are rarely so tidy and linear.

While this data is claimed to be kept very secure, in reality, hundreds of people in dozens of organizations typically have access to any given HMIS. There are guidelines in place to protect the data, but there is minimal monitoring to ensure that these guidelines are being followed, and many users found them very difficult to follow while working from home during the pandemic. I heard multiple stories of police or prosecutors improperly accessing information from HMIS. Clients can request to have their information removed from the system, but the process for doing so is rarely made clear to them, nor is this process clear even for the social workers processing the data.

After years of criticism, OrgCode—the group which develops VI-SPDAT—announced in 2021 that it would no longer be pushing VI-SPDAT updates, and as of 2022 it is no longer providing support for the current iteration of the tool. While this is a commendable move from OrgCode, stakeholders in homeless services must acknowledge the larger failures of HMIS and coordinated entry more generally. Many of the other tools used to perform coordinated entry have problems similar to VI-SPDAT’s, in part because coordinated entry in effect requires this intrusive data collection about highly personal issues to determine needs and rank clients accordingly. The problems are baked into the data requirements of coordinated entry itself.

The answer to this problem cannot be to completely do away with any classification tools for housing insecure individuals, because understanding the scope and demographics of homelessness is important in tackling it. But clearly a drastic overhaul of these systems is needed to make sure that they are efficient, noninvasive, and accurate. Above all, it is crucial to remember that tools for sorting homeless individuals are only useful to the extent that they ultimately provide better access to the services that actually alleviate homelessness, like affordable housing, mental health treatment, and addiction support. Demanding that beleaguered social service providers prioritize data collection over services, all while using intrusive, racially biased, and dehumanizing tools, will only worsen an intensifying crisis.

May 17, 2022. Batya Kemper, J.D. program, NYU School of Law.

Social rights disrupted: how should human rights organizations adapt to digital government?

TECHNOLOGY & HUMAN RIGHTS

Social rights disrupted: how should human rights organizations adapt to digital government?

As the digitalization of government is accelerating worldwide, human rights organizations who have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This challenges these organizations to adapt both their substantive focus and working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect, recognition, and accountability for social and economic rights in Uganda, focusing on the right to health, education, and social protection. It had never worked on government digitalization until recently.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens grant, under which persons over 80 years old receive cash grants, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. This program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database called the National Identity Register and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many other older persons had obtained national IDs, but the wrong birthdates were entered into the ID Register; in one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens grant was not paid to eligible beneficiaries because of faulty or missing data within the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. They partnered with CHRGJ’s Digital Welfare State team and Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations find themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented and are each separately linking their services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that using the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow down in their deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations who are not used to working on technology-related problems but who are witnessing harmful impacts from digital government.

One important lesson is that organizations will need to adopt new and different methods in dealing with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization which uses litigation only as a last resort. But when the Ugandan Ministry of Health announced that the national ID would be required to access COVID-19 vaccinations, “time was of the essence,” in Salima’s words. Together with Unwanted Witness, ISER immediately launched litigation seeking an injunction, arguing that this would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service provision charity. But, after seeing countless people unable to access services because they could not enroll in the ID Register, ISER felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals to navigate enrollment. Advocacy organizations may find themselves taking on such roles to assist those who are left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures, to interviewing and writing comprehensive reports, organizations should be working to ensure that affected communities’ experiences can be amplified and reflected to demonstrate the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

TECHNOLOGY & HUMAN RIGHTS

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

In 2019, the United States’ Social Security program comprised 23% of the federal budget. Apart from retirement benefits, the Social Security program provides Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), disability benefits for individuals unable to work. A multimillion-dollar disability fraud case in 2014 prompted the Social Security Administration to evaluate the controls it had in place to identify and prevent disability fraud. The review found that social media played a “critical role” in this fraud case, “as disability claimants were seen in photos on their personal accounts, riding on jet skis, performing physical stunts in karate studios, and driving motorcycles.” Although Social Security disability fraud is rare, the Social Security Administration has since adopted social media monitoring tools which use social media posts as a factor in determining whether an individual is committing disability fraud. Although human rights advocates have evaluated how such digitally enabled fraud detection tools violate privacy rights, few have explored the other human rights violations resulting from new digital tools employed by governments in the fight against benefit fraud.

To help fill this gap, this summer I conducted research to provide a voice to disabled individuals applying for and receiving Social Security disability benefits, whose experiences are largely invisible in society. From these interviews, it became clear that automated tools such as social media monitoring perpetuate the stigmatization of disabled people. Interviewees reported that, when aware of being monitored on social media, they felt compelled to modify their behavior to fit within the stigma associated with how disabled people should look and behave. These behavior modifications prevent disabled individuals from integrating into society and accessing services necessary to their survival.

Since the creation of social benefits, disabled people have been stigmatized in society, oftentimes viewed as either incapable or unwilling to work. Those who work are perceived as incapable employees, while those who are unable to work are viewed as lazy. Social media monitoring is the product of that stigma, as it relies on assumptions about how a disabled person should look and act. One individual I interviewed recounted that when they sought advice on the application process, people told them: “You can never post anything on social media of you having fun ever. Don’t post pictures of you smiling, not until after you are approved and even then, you have to make sure you’re careful and keep it on private.” This pressure never to smile or outwardly express happiness reflects the tendency of family members and professionals to underestimate a disabled individual’s quality of life. Such underestimation can lead to the assumption that “real” disabled people have a poor quality of life and are unable to be happy.

The social media monitoring tool’s methodology relies on potentially inaccurate data, because social media does not give a comprehensive view into a person’s life. People typically present an exaggerated, positive version of their lives on social media which glosses over more difficult elements. Schwartz and Halegoua describe this performance as the “spatial self,” which refers to how individuals “document, archive, and display their experience and/or mobility within space and place in order to represent or perform aspects of their identity to others.” Scholars of social media activity have published numerous studies on how people use images, videos, status updates, and comments on social media to present themselves in a very curated way.
Contrary to the positive spin most people put on their social media, disabled individuals feel compelled to “curate” their activity in a way that presents them as weak and incapable, to fit the narrative of who deserves disability benefits. For them, receiving disability benefits is crucial to surviving and paying for basic necessities.

The individuals I interviewed shared how such surveillance tools not only modify their behavior but also prevent them from exercising a whole range of human rights through social media. These rights are essential for all people, but particularly for disabled individuals, because silencing their voices strips away their ability to advocate for their community and form social relationships. Although social media offers avenues for socialization and political engagement to all users, it opens up especially significant opportunities for disabled individuals. Participants explained that without social media they would be unable to form these relationships offline, where accommodations for their disabilities often do not exist. Disabled individuals greatly value sharing on social media because the medium enables them to highlight aspects of their identity beyond being disabled. One individual described how important social media is for socializing, particularly during the Covid-19 pandemic: “I use Facebook mostly as a method of socializing especially right now with the pandemic going on, and occasionally political engagement.” Yet participants also said they feel the need to modify their behavior on social media, with one saying, “I don’t think anybody feels good being monitored all the time and that’s essentially what I feel like now post-disability. I can’t have fun or it will be taken away.” This is fundamentally a human rights issue.

These human rights issues include equality in social life and the ability to participate in the broader community online. In the long term, these inequalities can harm disabled people’s human rights, as their voices and experiences are not taken into account by those outside the disability community. Many reports on the disability community conclude that excluding disabled people and their input undermines their well-being. Ignoring or silencing the voices of disabled people prevents them from advocating for themselves and participating in decisions about their lives, making them vulnerable to disability discrimination, exclusion, violence, poverty, and untreated health problems. For example, one participant I interviewed shared how the process reinforces disability discrimination through behavior modification:

There was no room for me to focus on anything I could still do. Because the disability process is exactly that, it’s finding out what you can’t do. You have to prove that your life sucks. That adds to the disability shame and stigma too. So anyways, dehumanizing.

In addition to the social and economic rights discussed above, social media monitoring also impacts the enjoyment of civil and political rights for disabled individuals applying for and receiving Social Security disability benefits. Richards and Hartzog write, “Trust within information relationships is critical for free expression and a precursor to many kinds of political engagement.” They highlight how the Internet and social media have been used both for access to political information and for political engagement, with significant consequences for politics more broadly. Participants revealed to me that social media is their primary avenue for activism and for contributing to political thought: they use it to engage with political representatives on disability-related legislation and to raise awareness of disability-related issues with those representatives. By restricting freedom of expression, social media monitoring can shut disabled individuals out of the political sphere and the exercise of other civil and political rights.

I am a disabled person who recently qualified for disability benefits, so I personally understand this pressure to prove I deserve the benefits and accommodations allocated to people who are “actually” disabled. Social media monitoring perpetuates the harmful narrative that disabled individuals applying for and receiving disability benefits must prove their eligibility by modifying their behavior to fit disability stereotypes. This behavior modification restricts our ability to form meaningful relationships, push back against disability stigma, and advocate for ourselves through political engagement. As social media monitoring pushes us off social media platforms, our voices are silenced, and this exclusion leads to further social inequality. As disability rights activism continues to transform in the United States, I hope that this research will inspire future studies into disability rights, experiences of applying for and receiving SSI and SSDI, and how these may intersect with human rights beyond privacy rights.

October 29, 2020. Sarah Tucker, Columbia University Human Rights graduate program. She uses her experiences as a disabled woman working in tech to advocate for the disability community.