TECHNOLOGY & HUMAN RIGHTS

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations, entitled “Transformer States: A Conversation Series on Digital Government and Human Rights,” exploring the digital transformation of governments around the world. In this first iteration of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. In interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit: claims are filed and managed via an online account, and calculations of recipients’ entitlements rely on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the tax authority’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings, as well as a long list of data points about her circumstances, including how many children she has, her health situation, and her housing. UC is therefore ‘dynamic’: the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.
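
The basic shape of that calculation can be sketched in a few lines: circumstance-based elements are added up into a maximum award, which is then reduced (“tapered”) as earnings reported through the RTI feed rise. The Python sketch below is purely illustrative; the element names, amounts, and taper rate are placeholder assumptions, not DWP’s actual parameters.

```python
# Purely illustrative sketch of a UC-style "dynamic" award calculation.
# Element names, amounts, and the taper rate are placeholder assumptions,
# not DWP's actual parameters; the real calculation uses many more inputs.

TAPER_RATE = 0.63  # hypothetical: award reduction per pound earned above the work allowance

def monthly_award(standard_allowance: float,
                  child_element: float,
                  housing_element: float,
                  monthly_earnings: float,
                  work_allowance: float = 0.0) -> float:
    """Assemble circumstance-based elements, then taper by reported earnings."""
    maximum_award = standard_allowance + child_element + housing_element
    excess_earnings = max(0.0, monthly_earnings - work_allowance)
    return max(0.0, maximum_award - TAPER_RATE * excess_earnings)

# Earnings arrive monthly via the RTI feed, so the award fluctuates month to month:
print(monthly_award(400.0, 280.0, 600.0, monthly_earnings=900.0, work_allowance=290.0))
```

Because the earnings input changes every month, so does the output: this is what makes the benefit ‘dynamic’ in a way that the previous, fixed-award system was not.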

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms stated that UC would ‘cut through the complexity of the existing system’ by introducing online systems that would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments surrounding the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such ‘unknown’ was the Covid-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. The old, pre-UC system would have been unlikely to cope with this surge, and the response also compared very favorably with the failures seen in some US states—some New Yorkers, for example, were required to fax their applications for unemployment benefits.

The conversation then turned to the reality of UC from the perspective of recipients. Half of claimants, for example, were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures showing that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online while offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here: why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system, or people get a job before completing the application, or they realize how little payment they will receive, or that they will have to wait around five weeks for any payment.

But had the UK government been overly optimistic about future UC users’ access to, and ability to use, digital systems? The 2012 DWP Digital Strategy, for example, stated that “most of our customers and claimants are already online and more are moving online all the time,” while only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection via broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group (CPAG) have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly. CPAG was commended in the discussion for having “started to care about design” and for proposing specific design changes in its reports. Richard noted that certain elements that were not incorporated into the digital design of UC, and elements that were not automated at all, highlight the choices that have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. That the technological design of the system omits information about these entitlements demonstrates the importance and power of design choices, but it is unclear whether such choices were the result of political decisions or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the ‘design culture’ among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge number of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that exercises this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants: they have more paperwork to find, they have to contact their landlord for a signed copy of their lease, and so forth. Demanding this level of means-testing results in complex policy, and “there is only so much a designer can do to design away that complexity,” as Richard underlined. That said, Richard also argued that part of the problem is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power,” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need to rethink how government organizes its work and argued that government should prioritize shared, reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent about its digital services. There is an ever-increasing information asymmetry between government and individuals, and transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

The Aadhaar Mirage: A Second Look at the World Bank’s “Model” for Digital ID Systems 

Drawing inspiration from India’s Aadhaar system, the World Bank is promoting a dangerous digital ID model in the name of providing “a legal identity for all.” But rather than providing a model, Aadhaar is merely a mirage—an illusion of inclusiveness, accuracy, and universal identity.

Last month saw the publication of a report on the World Bank’s ill-conceived approach to digital ID, described as “essential reading for all concerned about human rights and development” by former UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston. As the press release summarizes:

“Governments around the world have been investing heavily in digital identification systems, often with biometric components (digital ID). The rapid proliferation of such systems is driven by a new development consensus, packaged and promoted by key global actors like the World Bank, but also by governments, foundations, vendors, and consulting firms. This new ‘manufactured consensus’ holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights.”

The report argues that India’s digital identification system has been central to the formation and promotion of this consensus. This has also become increasingly clear to me in my experience as an economist and identity management consultant who has provided advisory services to the World Bank. For the World Bank—and particularly its Identification for Development (ID4D) cross-sectoral practice—the Indian system, named Aadhaar, has become the singular answer to development and a key source of inspiration. This continues irrespective of the body of evidence showing how poor a “fit” the Aadhaar system is for identity management in India, and even more so elsewhere. Aadhaar represents a mirage: it does not deliver the universality, inclusiveness, unprecedented enrollment speed, meaningful legal identity, or accuracy that it is claimed to represent.

The World Bank’s own data on the completeness of ID systems displays a “20/80-rule”: the overwhelming odds are that digital ID systems that do not build on a functional civil registration system (in which births, deaths, marriages, and so forth are recorded) will exclude 20% or more of (mostly vulnerable) people, or will take at least 80 years to achieve full coverage. Many developing countries abandon underperforming ID systems obtained at great cost, only to launch new and even more sophisticated ones. Instead of using existing service infrastructure for civil registration, new digital ID systems are rolled out through quick-fix “mobile campaigns,” held once or twice, with mobile enrollment kits and temporary enrollment staff. This invariably leaves a coverage and service void behind.

But what about Aadhaar, then? Hasn’t Aadhaar enrolled almost all of the Indian population (1.29 billion by March 2021, out of 1.39 billion), in just a decade (from September 2010), at minimal cost (USD 1.60 per enrollment)? If one believes the data from the Unique Identification Authority of India (UIDAI), then yes. But independent data are unavailable; UIDAI controls the message—even the Comptroller and Auditor General of India (CAG) had to use UIDAI data for its first-ever audit of Aadhaar. Still, the CAG found that UIDAI’s operational and financial management have been utterly deficient. Claims about Aadhaar’s impressive coverage and universality might, then, be questionable. Neither is the database accurate: the Aadhaar system has no way of weeding out dead enrollees (about 80 million in 10 years) or people leaving India (including Indian citizens). The CAG also found UIDAI’s digital archiving, and its collection and storage of the physical documents that back up enrollments, to be inadequate.

Furthermore, claims about the uniqueness guaranteed by biometric technologies within Aadhaar are also illusory. There is no uniqueness for the approximately 25 million children under five years old enrolled in the database. Multiple Aadhaars were issued to the same persons, while different Aadhaar numbers associated with the same biometric data were issued to multiple people. Fingerprint authentication success for 2020–21 was only (an unverifiable) 74–76%. This may well be the canary in the coal mine, indicating exaggerated coverage claims for Aadhaar. Indeed, a Privacy International study explains the statistical impossibility of a unique biometric profile in a population of 1.39 billion people. Rather, each Indian person has an average of 17,500 indistinguishable biometric “doubles.”
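
A back-of-envelope calculation (my own simplification, assuming a uniform pairwise false-match rate across the population) shows how a figure like 17,500 “doubles” arises at this scale:

```python
# Back-of-envelope reading of the "17,500 doubles" figure, assuming a
# uniform pairwise false-match rate p across the population (a simplification).
population = 1_390_000_000
doubles_per_person = 17_500

# Expected doubles per person ~= p * (population - 1), so p ~= doubles / population.
implied_false_match_rate = doubles_per_person / (population - 1)
print(f"Implied pairwise false-match rate: {implied_false_match_rate:.2e}")
# ~1.26e-05: even a roughly 1-in-80,000 false-match rate, tiny per comparison,
# produces thousands of indistinguishable "doubles" per person at this scale.
```

In other words, a matching error rate that sounds negligible in a laboratory setting becomes, multiplied across more than a billion people, a guarantee of non-uniqueness.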

These claims about the benefits of biometrics have far-reaching implications as Aadhaar is linked to other areas of governance. A new law provides for the use of Aadhaar to verify the electoral roll. Weeding out “ghost entries” using a database whose uniqueness and de-duplication are themselves disproven is a doomed exercise, and represents another potential threat to India’s democracy.

Aadhaar’s “big numbers” are a mirage too. Proponents claim that over a billion people were newly enrolled at record speed and at low cost. But this is not as unprecedented as is suggested. For elections in India, 900 million voters are registered or verified every five years—which tops Aadhaar’s enrollment accomplishment. And India’s bureaucracy has long provided multiple forms of documentation; for proof of identity, date of birth, and address, enrollees can choose from a menu of no fewer than 106 valid documents. By 2016, fewer than 3 in 10,000 enrollees had lacked valid ID prior to Aadhaar enrollment. The Aadhaar system is a duplication that simply adds on biometrics—which, as we saw, are not the holy grail they are claimed to be. To suggest that other countries, which do not have this multitude of breeder documents and existing enrollment capacities, can copy the Aadhaar approach and obtain widespread coverage is an illusion.

As for claims that Aadhaar brings down costs and increases efficiencies: these low costs apply only in India. I have found that digital ID systems in many African countries cost 5 to 10 times more per capita than India’s. The high failure rates of ID systems in many developing countries add to the unbearable costs for poorer countries and their most vulnerable people.

This cries out for a better identity management model—one centered on citizenship, with civil registration as the foundation, that seeks to guarantee rights. A model closer to northern European identity management systems comes to mind, or the one already in use in South Africa. Such systems stand in contrast with Aadhaar, which seeks to side-step the “pesky political issue” of citizenship. This is perhaps the most serious and dangerous element of the mirage: Aadhaar only provides an “economic identity” (with rights limited to government hand-outs, and “voluntary” use for private services), which aims to facilitate economic transactions and private sector service delivery. The UIDAI accordingly insists that Aadhaar has “nothing to do with the citizenship issue.”

But Aadhaar’s “citizenship-blindness” is make-believe. Enrollment into Aadhaar was selective in Assam state, for example, where the issuance of digital ID was linked to citizenship determinations. Suddenly, Aadhaar proved to be an exclusionary “citizenship ID” after all. Aadhaar has dangerously played into worrying trends, such as the Citizenship Amendment Act and widespread lack of proof of citizenship—all while proponents claim that it is a model of how to achieve “legal identity for all.”

Aadhaar proves to be a mirage that we see while traveling on “the road to hell,” which is paved with imaginary intentions and is leading to a deadly development destination. Its presentation as a “model” digital ID system should be urgently reconsidered.

July 14, 2022. Drs. Jaap van der Straaten, MBA, is an economist and identity management consultant. In 2016–2017, he provided advisory services to the World Bank’s ID4D practice. He has published extensively on Elsevier’s SSRN and ResearchGate.

TECHNOLOGY & HUMAN RIGHTS

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

National data collection and service prioritization were supposed to make homeless services more equitable and efficient. Instead, they have created more risks and bureaucratic burdens for homeless individuals and homeless service organizations.

While serving as an AmeriCorps VISTA member supporting the IT and holistic defense teams at a California public defender’s office, I spent much of my time navigating the data bureaucracy that now weighs down social service providers across the country. In particular, I helped social workers and other staff members use tools like the Vulnerability Index – Service Prioritization Decision Assistance Tool (VI-SPDAT) and a Homeless Management Information System (HMIS). While these tools were ostensibly designed to improve care for homeless and housing-insecure people, all too often they did the opposite.

An HMIS is a localized information network and database used to collect client-level data and data on the provision of housing and services to homeless or at-risk persons. In 2009, Congress passed the HEARTH Act, mandating the use of HMIS by communities in order to receive federal funding. HMIS demands coordinated entry, a process by which certain types of data are cataloged and clients are ranked according to their perceived need. One of the most common tools for coordinated entry—and the one used by the social workers I worked with—is the VI-SPDAT: effectively a questionnaire, a battery of highly invasive questions that seek to determine the level of need of the homeless or housing-insecure individual to whom it is administered.

These tools have been touted as game-changers. Yet while homelessness across the country, and especially in California, continued to decrease modestly in the years immediately following the enactment of the HEARTH Act, it began to increase again in 2019 and rose sharply in 2020, even before the onset of the COVID-19 pandemic. This is not to suggest a causal link; indeed, the evidence suggests that factors such as rising housing costs and a worsening methamphetamine epidemic are at the heart of rising homelessness. But there is little evidence that intrusive tools like the VI-SPDAT alleviate these problems.

Indeed, these tools have themselves been creating problems for homeless persons and social workers alike. There have been harsh criticisms from scholars like Virginia Eubanks about the accuracy and usefulness of VI-SPDAT. It has been found to produce unreliable and racially biased results. Rather than decreasing bias as it purports to do, VI-SPDAT has baked bias into its algorithms, providing a veneer of scientific objectivity for government officials to hide behind.

But even if these tools were made more reliable and less biased, they would nonetheless cause harm and stigmatization. Homeless individuals and social workers alike report finding the assessment dehumanizing and distressing. For homeless individuals, it can also feel deeply risky. Those who don’t score high enough on the assessment are often denied housing and assistance altogether. Those who score too high run the risk of involuntary institutionalization.
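
The stakes of that scoring can be seen in the threshold logic itself. The sketch below is a purely illustrative rendering of score-band triage; the cutoffs and intervention labels are placeholder assumptions, not the actual VI-SPDAT scoring bands.

```python
# Minimal sketch of score-band triage: a single number routes a person to
# an intervention tier. Cutoffs and labels here are hypothetical placeholders.

LOW_CUTOFF = 4    # below this: no referral at all ("not vulnerable enough")
HIGH_CUTOFF = 8   # at or above this: highest-intensity intervention

def triage(score: int) -> str:
    """Map an assessment score to a housing intervention tier."""
    if score < LOW_CUTOFF:
        return "no referral"
    if score < HIGH_CUTOFF:
        return "rapid re-housing"
    return "permanent supportive housing"  # scoring "too high" carries its own risks

print(triage(3), triage(6), triage(11))
```

The point of the sketch is how much turns on a one-point difference: a person’s entire access to housing assistance can hinge on which side of an arbitrary cutoff their answers place them.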

Meanwhile, these tools place significant burdens on social workers. To receive federal funding, organizations must provide not only an enormous amount of highly intimate information about homeless persons and their life histories, but also a minute accounting of every interaction between the social worker and the client. One social worker would frequently work with clients from 9 to 5, go home to make dinner for her children, and then work into the wee hours of the night attempting to satisfy all of her data-logging requirements.

I once sat through a 45-minute video call with a veteran social worker who broke down in tears, worried that the grant funding her position might be taken away if her record-keeping was less than perfect; yet the design of the HMIS made it virtually impossible to be completely honest. The system assumed that four-hour client interactions could easily be broken down into distinct chunks—discussed x problem from 4:15 to 4:30, y problem from 4:30 to 4:45, and so on. Of course, anyone who has ever had a conversation with another human being, let alone a human being with mental disabilities or substance use problems, knows that interactions are rarely so tidy and linear.

While this data is claimed to be kept very secure, in reality hundreds of people across dozens of organizations typically have access to any given HMIS. There are guidelines in place to protect the data, but there is minimal monitoring to ensure that these guidelines are followed, and many users found them very difficult to follow while working from home during the pandemic. I heard multiple stories of police or prosecutors improperly accessing information from an HMIS. Clients can request to have their information removed from the system, but the process for doing so is rarely made clear to them; nor is it clear even to the social workers processing the data.

After years of criticism, OrgCode—the group that develops the VI-SPDAT—announced in 2021 that it would no longer be pushing VI-SPDAT updates, and as of 2022 it no longer provides support for the current iteration of the tool. While this is a commendable move from OrgCode, stakeholders in homeless services must acknowledge the larger failures of HMIS and coordinated entry more generally. Many of the other tools used to perform coordinated entry have problems similar to the VI-SPDAT’s, in part because coordinated entry in effect requires this intrusive data collection about highly personal issues in order to determine needs and rank clients accordingly. The problems are baked into the data requirements of coordinated entry itself.

The answer to this problem cannot be to do away with classification tools for housing-insecure individuals entirely, because understanding the scope and demographics of homelessness is important in tackling it. But clearly a drastic overhaul of these systems is needed to make sure that they are efficient, noninvasive, and accurate. Above all, it is crucial to remember that tools for sorting homeless individuals are only useful to the extent that they ultimately provide better access to the services that actually alleviate homelessness, like affordable housing, mental health treatment, and addiction support. Demanding that beleaguered social service providers prioritize data collection over services, all while using intrusive, racially biased, and dehumanizing tools, will only worsen an intensifying crisis.

May 17, 2022. Batya Kemper, J.D. program, NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Social rights disrupted: how should human rights organizations adapt to digital government?

As the digitalization of government accelerates worldwide, human rights organizations that have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This requires them to adapt both their substantive focus and their working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect, recognition, and accountability for social and economic rights in Uganda, focusing on the rights to health, education, and social protection. Until recently, it had never worked on government digitalization.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens grant, under which persons over 80 years old receive cash grants, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. This program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database called the National Identity Register and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many others had obtained national IDs, but the wrong birthdates were entered into the Register; in one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens grant was not paid to eligible beneficiaries because of faulty or missing data within the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. It partnered with CHRGJ’s Digital Welfare State team and the Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations find themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented and are each separately linking their services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that using the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow down in their deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations that are not used to working on technology-related problems but are witnessing the harmful impacts of digital government.

One important lesson is that organizations will need to adopt new and different methods to deal with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization that uses litigation only as a last resort. But when the Ugandan Ministry of Health announced that a national ID would be required to access COVID-19 vaccinations, “time was of the essence,” in Salima’s words. Together with Unwanted Witness, ISER immediately launched litigation seeking an injunction, arguing that the requirement would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service provision charity. But, seeing countless people unable to access services because they could not enroll in the ID Register, ISER felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals to navigate enrollment. Advocacy organizations may find themselves taking on such roles to assist those who are left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures, to interviewing and writing comprehensive reports, organizations should be working to ensure that affected communities’ experiences can be amplified and reflected to demonstrate the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented on at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunity and job creation in the knowledge economy, improved environmental sustainability, and increased efficiency and convenience. By collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern, technologically advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create different publics and impact them differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were required only of migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried first on individuals and communities with the least power, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces”—needed to ensure the existence of certain labor markets, but also needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been localized efforts to challenge certain technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation, with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program at NYU School of Law, whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a reputed law firm in India.

TECHNOLOGY & HUMAN RIGHTS

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

In 2019, the United States’ Social Security program comprised 23% of the federal budget. Apart from retirement benefits, the Social Security program provides Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), disability benefits for disabled individuals unable to work. A multimillion-dollar disability fraud case in 2014 prompted the Social Security Administration to evaluate its controls for identifying and preventing disability fraud. The review found that social media played a “critical role” in this fraud case, “as disability claimants were seen in photos on their personal accounts, riding on jet skis, performing physical stunts in karate studios, and driving motorcycles.” Although Social Security disability fraud is rare, the Social Security Administration has since adopted social media monitoring tools which use social media posts as a factor in determining whether disability fraud is being committed by an ineligible individual. Although human rights advocates have evaluated how such digitally enabled fraud detection tools violate privacy rights, few have explored the other human rights violations resulting from new digital tools employed by governments in the fight against benefit fraud.

To help fill this gap, this summer I conducted research to provide a voice to disabled individuals applying for and receiving Social Security disability benefits, whose experiences are largely invisible in society. From these interviews, it became clear that automated tools such as social media monitoring perpetuate the stigmatization of disabled people. Interviewees reported that, when aware of being monitored on social media, they felt compelled to modify their behavior to fit within the stigma associated with how disabled people should look and behave. These behavior modifications prevent disabled individuals from integrating into society and accessing services necessary to their survival.

Since the creation of social benefits, disabled people have been stigmatized in society, often viewed as either incapable of working or unwilling to work. Those who work are perceived as incapable employees, while those who are unable to work are viewed as lazy. Social media monitoring is the product of that stigma, as it relies on assumptions about how a disabled person should look and act. One individual I interviewed recounted that when they sought advice on the application process, people told them: “You can never post anything on social media of you having fun ever. Don’t post pictures of you smiling, not until after you are approved and even then, you have to make sure you’re careful and keep it on private.” The pressure not to smile or outwardly express happiness ties into the tendency of family members and professionals to underestimate a disabled individual’s quality of life. This underestimation can lead to the assumption that “real” disabled people have a poor quality of life and are unable to be happy.

The social media monitoring tool’s methodology relies on potentially inaccurate data, because social media does not give a comprehensive view into a person’s life. People typically present an exaggerated, positive lens on their lives on social media which glosses over more difficult elements. Schwartz and Halegoua describe this presentation as the “spatial self,” which refers to how individuals “document, archive, and display their experience and/or mobility within space and place in order to represent or perform aspects of their identity to others.” Scholars of social media activity have published numerous studies on how people use images, videos, status updates, and comments on social media to present themselves in a very curated way.

Contrary to the positive spin most individuals put on their social media, disabled individuals feel compelled to “curate” their social media activity in a way that presents them as weak and incapable, to fit the narrative of who deserves disability benefits. For them, receiving disability benefits is crucial to survive and pay for basic necessities.

The individuals I interviewed shared how such surveillance tools not only modify their behavior but also prevent them from exercising a whole range of human rights through social media. These rights are essential for all people, but particularly for disabled individuals, because the silencing of their voices strips away their ability to advocate for their community and form social relationships. Although social media offers avenues for socialization and political engagement to all of its users, it opens up especially significant opportunities for disabled individuals. Participants expressed that without social media they would be unable to form these relationships offline, where accommodations for their disability do not exist. Disabled individuals greatly value sharing on social media as the medium enables them to highlight aspects of their identity beyond being disabled. One individual expressed to me how important social media is for socializing, particularly during the Covid-19 pandemic: “I use Facebook mostly as a method of socializing especially right now with the pandemic going on, and occasionally political engagement.” Participants also expressed that they feel they need to modify their behavior on social media, with one participant saying, “I don’t think anybody feels good being monitored all the time and that’s essentially what I feel like now post-disability. I can’t have fun or it will be taken away.” This is fundamentally a human rights issue.

These human rights issues include equality in social life and the ability to participate in the broader community online. In the long term, these inequalities can harm disabled people’s human rights, as their voices and experiences are not taken into account by people outside of the disability community. Many reports on the disability community agree that the exclusion of disabled people and their input undermines their well-being. Ignoring or silencing the voices of disabled people prevents them from advocating for themselves and participating in decisions involving their lives, making them vulnerable to disability discrimination, exclusion, violence, poverty, and untreated health problems. For example, a participant I interviewed shared how the process reinforces disability discrimination through behavior modification:

There was no room for me to focus on anything I could still do. Because the disability process is exactly that, it’s finding out what you can’t do. You have to prove that your life sucks. That adds to the disability shame and stigma too. So anyways, dehumanizing.

In addition to the social and economic rights mentioned above, social media monitoring also impacts the enjoyment of civil and political rights by disabled individuals applying for and receiving Social Security disability benefits. Richards and Hartzog write, “Trust within information relationships is critical for free expression and a precursor to many kinds of political engagement.” They highlight how the Internet and social media have been used both for access to political information and for political engagement, which has a large impact on politics in general. Participants revealed to me that they used social media as a primary method for engaging in activism and contributing to political thought. The individuals I interviewed shared that they use social media to engage with political representatives on disability-related legislation and to bring disability-related issues to their representatives’ attention. By restricting freedom of expression, social media monitoring can exclude disabled individuals from participating in the political sphere and from exercising other civil and political rights.

I am a disabled person who recently qualified for disability benefits, so I personally understand the pressure to prove I deserve the benefits and accommodations allocated to people who are “actually” disabled. Social media monitoring perpetuates the harmful narrative that disabled individuals applying for and receiving disability benefits need to prove their eligibility by modifying their behavior to fit disability stereotypes. This behavior modification restricts our ability to form meaningful relationships, push back against disability stigma, and advocate for ourselves through political engagement. As social media monitoring pushes us off social media platforms, our voices are silenced, and this exclusion leads to further social inequalities. As disability rights activism continues to transform in the United States, I hope that this research will inspire future studies into disability rights, experiences of applying for and receiving SSI and SSDI, and how they may intersect with human rights beyond privacy rights.

October 29, 2020. Sarah Tucker, Columbia University Human Rights graduate program. She uses her experiences as a disabled woman working in tech to advocate for the Disability community.

TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation. The video recording and additional readings can be found below.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
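
To make this correlational logic concrete, the sketch below shows how past administrative records can be turned into a scoring function that flags children whose profiles resemble past cases. It is a deliberately simplified illustration, not the Childhood Alert model itself: the feature names, data, and choice of algorithm are placeholder assumptions.

```python
# Illustrative sketch of correlational risk scoring, NOT the actual
# Childhood Alert model. Features and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: one row per child, with administrative
# variables; label = 1 where a harmful outcome was recorded in the past.
# Columns: [n_social_programs, prior_protective_contact, local_unemployment_rate]
X_train = np.array([
    [2, 1, 0.12],
    [0, 0, 0.04],
    [3, 1, 0.15],
    [1, 0, 0.06],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Scoring: a child whose profile resembles past flagged cases receives a
# high "risk" probability, which is then used to prioritize intervention.
new_child = np.array([[2, 1, 0.13]])
print(model.predict_proba(new_child)[0, 1])
```

Note that every input in a model of this kind is a record of past administrative contact: such a system can only ever find risk where the state has already looked, which is precisely the skew discussed below.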

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: the social programs of which the child’s family are beneficiaries; the family’s educational background; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, the percentage of single-parent households, and the unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. By this design, children in poorer areas can be expected to receive higher risk scores, which is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty; indigenous groups have Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, thus raising issues of racial and ethnic bias. Furthermore, the datasets used will themselves reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected in disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña pointed out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System, which “translates data on families into risk profiles [of] the 20 families in most urgent need,” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool, which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for Allegheny County, Pennsylvania was the subject of high-profile and powerful critique by many actors, including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. Such contexts are often seen as an “easier target,” with lower chances of backlash than in places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship, together with a staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state, may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies are vital to supporting the organizations and activists working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Regulating Artificial Intelligence in Brazil

TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments may have significant global implications, potentially inspiring more creative, rights-based, and socio-economically grounded regulation of emerging technologies across the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First came the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input despite the participatory and multi-stakeholder mechanisms available, the Commission of Jurists took specific precautions to be more open to public input. Its proposed draft legislation, grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, given Brazil’s standing in other discussions related to internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But it was a very minimal bill: it effectively established that regulation of AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI; for example, the federal agency regulating the healthcare sector would regulate AI applications in healthcare. There were no specific obligations or sanctions for companies developing or deploying AI, only guidelines for the government on how it should promote AI’s development. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible, and it was quickly approved in the House of Representatives without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. Other legislation already applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). Civil society has leveraged these existing laws to protect people from AI harms. For example, the Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action under consumer protection legislation against Via Quatro, the private company responsible for the 4-Yellow subway line in São Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But given that Bill 21/20 was intended to specifically govern AI, academics and NGOs raised concerns that it would water down the legal protections already afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, which pointed to the lack of racial and regional diversity in the Commission’s membership, as well as the need for different areas of knowledge to contribute to the debate. This criticism reflects the socio-economic realities of Brazil, one of the most unequal countries in the world, where inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested through this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the proposed new bill differed from Bill 21/20 in several respects. First, the new bill borrows from the EU’s AI Act in adopting a risk-based approach: obligations are graduated according to the risks an AI system poses. However, following the Brazilian tradition of structuring regulation from the perspective of individual and collective rights, the new bill merges the European risk-based approach with a rights-based approach: it confers individual and collective rights that apply in relation to all AI systems, regardless of the level of risk they pose.

Secondly, the new bill includes additional obligations for the public sector, given the state’s distinct impact on people’s rights. For example, there is a ban on the processing of racial information, and there are provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security purposes, this proposal was not adopted: instead, the bill establishes a moratorium, requiring that a law regulating this use be approved first.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, the president of the Senate presented, in May 2023, a new bill for AI regulation replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen once the internal commission concludes its work, as political decisions will shape the next developments. What is important to bear in mind, however, is how far the discussion has progressed: from an initial bill that was minimal in scope and premised on minimal regulation, to one that is far more protective of individual and collective rights and attentive to Brazil’s particular socio-economic realities. Brazil has historically played an important progressive role in global discussions on the regulation of emerging technologies, as with its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in that role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law. Her research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. To create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, as well as demographic information, including name, birthdate, and address. Once an individual is set up in the Aadhaar system (a process that can be complicated, depending on how easily their biometric data can be captured, where they live, and their mobility), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s finger or iris, though One-Time Passcodes and QR codes can also be used.

The welfare “façade”

The Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its stated vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity, and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, justifying Aadhaar as a way for them to access welfare. However, recent research suggests that fewer than 0.03% of Indian residents lacked formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of service. The government asserts that the system is “self-cleaning,” meaning that individuals must fix their identity records themselves: for example, they must manually correct errors in their name or date of birth, despite not always having the resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was little data or research on the effectiveness of biometric technologies for accurately establishing identity in developing countries. The past decade of research, however, reveals that biometric technologies do not work well in India: it can be impossible to reliably provide a fingerprint in populations with large proportions of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrollment and authentication, these difficulties frustrate access to essential services on an ongoing basis, as the sketch after this list illustrates.

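To make this failure mode concrete, here is a minimal sketch in Python. The threshold, similarity scores, and function names are invented for illustration (Aadhaar’s matching internals are not public); the point is simply that when a successful biometric match is the sole gateway to an entitlement, a worn fingerprint becomes a denial of service.

```python
# A minimal, hypothetical sketch of a biometric gate. The threshold and
# similarity scores are invented for illustration; Aadhaar's actual
# matching internals are not public.

MATCH_THRESHOLD = 0.80  # assumed minimum similarity needed to authenticate

def authenticate(similarity: float) -> bool:
    """Return True only if the live scan matches the enrolled template."""
    return similarity >= MATCH_THRESHOLD

def dispense_rations(similarity: float) -> str:
    # No human fallback in this sketch: a failed match simply means no
    # rations, mirroring the access failures described above.
    if authenticate(similarity):
        return "rations dispensed"
    return "denied: biometric mismatch"

# A clean fingerprint authenticates; a worn one (common among manual
# laborers) falls below the threshold and is turned away.
print(dispense_rations(0.91))  # rations dispensed
print(dispense_rations(0.62))  # denied: biometric mismatch
```
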
Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purportedly designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was designed to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given its breadth and reach, the database is an attractive asset for private profit-making and is seen as the foundation for an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be confused with identification. Usha noted that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, as during the Holocaust and the Yugoslav Wars. Further risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate, such as their status as a sex worker, health information, or socio-economic status, are combined in a single dataset and made available in a variety of contexts, even if that data is outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redesign attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which, in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

On the Frontlines of the Digital Welfare State: Musings from Australia

TECHNOLOGY & HUMAN RIGHTS

On the Frontlines of the Digital Welfare State: Musings from Australia

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea to poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of newly unemployed people were outraged that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I have a certified disability affecting my ability to seek work, the digital system had “glitched” (a popular term used by those in power when payment systems fail) and unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. After I narrowly missed a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I didn’t even get to speak to a person who could have resolved the issue. This is the danger of trusting digital technology over humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it is not run by a bank: there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can withdraw only 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (purchases of alcohol, for example, are restricted, as are online retailers like eBay). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations; it is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning that after a certain point in the month the “limit” for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. Cardholders are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or a second-hand fridge. When you can’t use cash, avenues for obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority of those affected by this system have no problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money: an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and the internet to check your account balance, which can be impossible for those on low incomes. Pandemic restrictions close the pubs, universities, cafes, and libraries that people rely on for internet access, and those without access are left by the wayside. “Glitches” are also common in Indue accounts: money can go missing without explanation, ruining account-holders’ plans and forcing them to waste hours in non-stop arguments with a brick-wall bureaucracy and faceless people telling them they cannot access their own money.

Politicians recently had the opportunity to reject this system of brutality. The cashless welfare card “trials” were slated to end on December 31, 2020, and a bill was put to a vote to determine whether these “trials” would continue. The people affected by this system had already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians’ offices and beg them to see the hell they’re experiencing. They used their internet data to email and to rally others to do the same. I personally delivered letters to two politicians’ offices, complete with academic studies detailing the problems with IM. For a split second, it seemed like the politicians had listened, and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. The lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to “glitches” or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea to poverty?

February 1, 2021. Nijole Naujokas, an Australian activist and writer who is passionate about social justice for the vulnerable. She is the current Secretary of the Australian Unemployed Workers’ Union and is completing an Honors degree in Creative Writing at The University of Adelaide.