TECHNOLOGY & HUMAN RIGHTS

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

Brazil’s Bolsa Família Program, the world’s largest conditional cash transfer program, is indicative of broader shifts in data-driven social security. While its beneficiaries are becoming “transparent” as their data is made available, the way the State uses beneficiaries’ data is increasingly opaque.

“She asked a lot of questions and started filling out the form. When I asked her about when I was going to get paid, she said, ‘That’s up to the Federal Government.’” This experience of applying for Brazil’s Bolsa Família Program (“Programa Bolsa Família” in Portuguese, or PBF), the world’s largest conditional cash transfer program, hints at the informational asymmetries between individuals and the State. Such asymmetries have long existed, but information and communications technologies (ICTs) can exacerbate these imbalances. ICTs enable States to handle an increasing amount of personal data, and this is especially true in the PBF. In June 2020, 14.2 million Brazilian families living in poverty – 43.7 million individuals – were beneficiaries of the Bolsa Família program.

At the core of the PBF’s structure is a register called CadÚnico, which is used for more than 20 social policies. It includes detailed data on heads of households and less granular data on other family members. The law designates women as the heads of household and thereby the main PBF beneficiaries. Information is collected about income, number of people living together, level of education and literacy, housing conditions, access to work, disabilities, and ethnic groups. This data is used to select PBF beneficiaries and to monitor their compliance with the conditions on which the maintenance of the benefit depends, such as requirements that children attend school. The federal government also uses the CadÚnico to identify multidimensional vulnerabilities, grant other benefits, and enable research. Although different programs feed the CadÚnico, the PBF is its most important information provider due to its colossal size. In March 2021, the CadÚnico comprised 75.2 million individual entries from 28.9 million families; PBF beneficiaries account for half of these entries.

The person responsible for the family unit within the PBF must complete every entry of the “main form,” which consists of 77 questions with varying degrees of detail and sensitivity. Together, these data points expose the sensitive personal information and vulnerabilities of low-income individuals.

The scope of this large and comprehensive dataset is celebrated by social policy experts because it enables the State to target other policies according to need. Indeed, the CadÚnico has been used to identify the relevant beneficiaries for policies ranging from electricity tariff discounts to higher education subsidies. Holding huge amounts of information about low-income individuals can allow States to proactively target needs-based policies.

But when the State is not guided by the principle of data minimization (i.e., collecting only the data that is necessary and no more), this appetite for information grows and places the burden of risk on individuals: they become transparent to the State, while the State becomes increasingly opaque to them.

Upon registering for the PBF, citizens are not informed about what will happen to the information they provide. The training materials for officials registering beneficiaries, for example, only note that officials must warn potential beneficiaries of their liability for providing false or inaccurate information. They do not state that officials must tell beneficiaries how their data will be used, what their data rights are, or when or whether they might receive their cash transfer. The emphasis therefore lies on the responsibilities of the potential beneficiary rather than those of the State. This lack of transparency about how people’s data will be used reduces citizens’ ability to exercise their rights.

In addition to the increased visibility of recipients to the State, the PBF also releases the beneficiaries’ data to the public due to strict transparency requirements. Though CadÚnico data is generally confidential, PBF recipients’ personal data is publicly available through different paths:

  • The Federal Government’s Transparency Portal publishes a monthly list containing each beneficiary’s name, municipality, NIS (social security number), and the amounts paid.
  • The portal of Caixa Econômica Federal, the public bank that administers social benefits, allows anyone to check the status of a benefit by entering a name, NIS, and CPF (taxpayer’s identity number).
  • The NIS of any citizen can be queried at the CadÚnico Citizen’s Consultation Portal by providing a name, mother’s name, and birth date.

In making a person’s status as a PBF beneficiary so easily accessible, these portals leave the (mostly female) beneficiaries deprived of privacy from all sides and stigmatized. Not only are beneficiaries surveilled by the State as it closely monitors their compliance with the PBF’s conditionalities, but they are also monitored by fellow citizens. Citizens have filed complaints with the PBF about beneficiaries they believe should not be receiving cash transfers. At InternetLab, we used the Brazilian Access to Information Law to gain access to some of these complaints: 60% of them included personal identification information about the accused beneficiary, suggesting that citizens are using the portals above to check the databases, monitoring and reporting their “undeserving” neighbors.

The availability of this data has further worrying consequences: at InternetLab, we have witnessed several instances of fraud and electoral propaganda directed at PBF beneficiaries’ phones, and it is not clear where this contact data came from. Different actors are profiling and targeting Brazilian citizens according to their socio-economic vulnerabilities.

The public availability of beneficiaries’ data is backed by law and arises from a desire to fight corruption in Brazil, which requires that government spending, including spending on social programs, be transparent. But spending on social programs has become more controversial in recent years, amidst an economic crisis and the rise of conservative political majorities, and misplaced ideas of “corrupt beneficiaries” have mingled with anti-corruption sentiments. The emphasis has been placed on making beneficiaries “transparent,” rather than the government.

Anti-corruption laws do not adequately differentiate between transparency practices that confront corruption and strengthen democracy, and those that disproportionately reinforce vulnerabilities and inequalities by focusing on recipients of social programs. Public contracts, public employees’ salaries, and beneficiaries of social benefits are all exposed on the same grounds. But these are substantially different uses of public resources, and exposing these different kinds of data has very unequal impacts, with beneficiaries the most likely to be harmed by this “transparency.”

The personal data of social program beneficiaries should be treated with more care, and we should question whether disclosing so much information about them is necessary. In the wake of Brazil’s General Data Protection Law, which came into force last year, it is vital that the work to increase the transparency of the State continues while the privacy of the vulnerable is protected, not the other way around.

May 3, 2021. Nathalie Fragoso and Mariana Valente.
Nathalie Fragoso, Head of Research, Privacy and Surveillance, InternetLab.
Mariana Valente, Associate Director of InternetLab.

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the actual dangers and impacts of Social Credit as it exists. On March 31, 2021, Christiaan van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nationwide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as public discussion and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely-discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program, the Joint Rewards and Sanctions mechanism, shares information across government agencies about companies that have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the Supreme People’s Court maintains a list of shixin (“dishonest”) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a six-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to these controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used to incentivize behavior, not to sanction people or limit their access to basic public services. At present, each of the systems described here is separate from the others.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects of the program nonetheless raise significant concerns. Such systems will, of course, worsen existing problems surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but the conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit technological solutionism which invisibilizes the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions such as an app which “scores” firms based on their record of wage payments, a depoliticized technological fix is promised to solve systemic problems. In the process, it obscures the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Everyone Counts! Ensuring that the human rights of all are respected in digital ID systems

The Everyone Counts! initiative was launched in the fall of 2020 with a firm commitment to a simple principle: the digital transformation of the state can only qualify as a success if everyone’s human rights are respected. Nowhere is this more urgent than in the context of so-called digital ID systems.

Research, litigation, and broader advocacy on digital ID in countries like India and Kenya have already revealed the dangers of exclusion from digital ID for ethnic minority groups[1] and for people living in poverty.[2] However, a significant gap remains between the magnitude of the human rights risks involved and the urgency of research and action on digital ID in many countries. Despite the active promotion and use of these systems by governments, international organizations, and the private sector, in many cases we simply do not know how digital ID systems lead to social exclusion and human rights violations, especially for the poorest and most marginalized.

Therefore, the Everyone Counts! initiative aims to engage in both research and action to address social exclusion and related human rights violations that are facilitated by government-sponsored digital ID systems.

Does the emperor have new clothes? The yawning evidence gap on digital ID

The common narrative behind the rush towards digital ID systems, especially in the Global South, is by now familiar: “As many as 1 billion people across the world do not have basic proof of identity, which is essential for protecting their rights and enabling access to services and opportunities.”[3] Digital ID is presented as a key solution to this problem, while simultaneously promising lower income countries the opportunity to “leapfrog” years of development via digital systems that assist in “improving governance and service delivery, increasing financial inclusion, reducing gender inequalities by empowering women and girls, and increasing access to health services and social safety nets for the poor.”[4]

This perspective, for which the World Bank and its Identification for Development (ID4D) Initiative have become the official “anchor” internationally, presents digital ID systems as a force for good. The Bank acknowledges that exclusionary issues may arise, but is confident that they can be overcome through good intentions and safeguards. Digging underneath the surface of these confident assertions, however, one finds remarkably little research into the overall impact of digital ID systems on social exclusion and a range of related human rights. For instance, after entering the digital ID space in 2014, publishing prolifically, and guiding billions of development dollars into furthering this agenda, the World Bank’s ID4D team concedes in its 2020 Annual Report that “given that this topic is relatively new to the development agenda, empirical research that rigorously evaluates the impact of ID systems on development outcomes and the effectiveness of strategies to mitigate risks has been limited.”[5] In other words, despite warning signs from several countries around the world, including chilling stories of people who have died because they were shut out of biometric ID systems,[6] the digital ID agenda moves full steam ahead without a full understanding of its exclusionary potential.

Making sure that everyone truly counts

While the Everyone Counts! initiative only has a fraction of the resources of ID4D, we hope to inject some much needed reality into this discourse through our work. We will do this by undertaking–together with research partners in different countries–empirical human rights research that investigates how the introduction of a digital ID system leads to or exacerbates social exclusion. For example, we are currently undertaking a joint research project with Ugandan research partners focused on Uganda’s digital ID system, Ndaga Muntu, and its impact on poor women’s right to health, and older persons’ right to social assistance.

Our presence at a leading university and law school underlines our commitment to high-quality, cutting-edge research, but we are not in the business of accumulating knowledge purely for its own sake. We aim to transform our research into action. This could come in the form of strategic litigation and advocacy, such as the work by our partners described below, or in the form of network building and information sharing. For instance, together with co-sponsors like the UN Economic Commission for Africa (UNECA) and the Open Society Justice Initiative (OSJI), we are hosting a workshop series for African civil society organizations on digital ID and exclusion. The series creates a space where activists hoping to resist the exclusion associated with digital ID can come together, gain access to tools, information, and networks, and form a community of practice that facilitates further activism.

Ensuring non-discriminatory access to vaccines: An early case study 

A recent example from Uganda demonstrates just how effective targeted action against digital ID systems can be. The government began rolling out its national digital ID system, Ndaga Muntu, as early as 2015, and it has gradually become a mandatory requirement for accessing a range of social services in Uganda.

To address the threat of COVID-19, the Ugandan government recently began a free, national vaccine program. Among the groups eligible to receive the vaccine were all adults over the age of 50. On March 2, however, the Ugandan Minister of Health announced that only those Ugandan citizens who could produce a Ndaga Muntu card, or at least a national ID number (NIN), would be able to receive the vaccine. Conservative estimates suggest that over 7 million eligible Ugandans have not yet received their national ID card.

Our research partners, the Initiative for Social and Economic Rights (ISER) and Unwanted Witness (UW), sued the Ugandan government on March 5 to challenge the mandatory requirement of the Ndaga Muntu.[7] They argued that the national ID requirement would not only exclude millions of eligible older persons from receiving the vaccine, but would also set a dangerous precedent allowing for further discrimination in other areas of social services.[8]

On March 9, the Ministry of Health announced that it would change the national ID requirement so that alternative forms of identification documents, which are much more accessible to poor Ugandans, could be used to access the COVID-19 vaccine.[9] This was a critical victory for the millions of Ugandans who seek access to the life-saving vaccine, but it is also a warning sign of the subtle and pernicious ways in which the digital ID system may be used to exclude.

Humans first, not systems first

The Ugandan case study shows the urgent need for the human rights movement to engage in discussions about digital transformation so that fundamental rights are not lost in the rush to build a “modern, digital state.” In our work on this initiative, we will remain similarly committed to prioritizing how individual human beings are affected by digital ID systems. Listening to their stories, understanding the harms they experience, and channeling their anger and frustration to other, more privileged and powerful audiences, is our core purpose.

Digital transformation is a field prone to a utilitarian logic: “if 99% of the population is able to register for a digital ID system, we should celebrate it as a success.” Our qualitative work not only challenges the supposed benefits for these 99%, but emphasizes that the remaining 1% represents a multitude of individual human beings who may be victimized. Our research so far has only confirmed our intuition that digital ID systems can inflict significant harms, particularly on those who are poorest, most vulnerable, and least powerful in society. These excluded voices deserve to be heard and to become a decisive factor in deciding the shape of our digital future.

April 6, 2021. Christiaan van Veen and Katelyn Cioffi.

Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law. 

Katelyn Cioffi, Senior Research Scholar, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Marketizing the digital state: the failure of the ‘Verify’ model in the United Kingdom

Verify, the UK government’s digital identity program, sought to construct a market for identity verification in which companies would compete. But the assumption that companies should be positioned between government and individuals who are trying to access services has gone unquestioned.

The story of the UK government’s Verify service has been told as one of outright failure and a colossal waste of money. Intended as the single digital portal through which individuals accessing online government services would prove their identity, Verify underperformed for years and is now effectively being replaced. But accounts of its demise often focus on technical failures and inter-departmental politics, rather than evaluating the underlying political vision that Verify represents. This is a vision of market creation, whereby the government constructs a market for identity verification within which private companies can compete. As Verify is replaced and the UK government’s ‘digital transformation’ continues, the failings of this model must be examined.

Whether an individual wants to claim a tax refund from Her Majesty’s Revenue and Customs, renew her driver’s license through the Driver and Vehicle Licensing Agency, or receive her welfare payment from the Department for Work and Pensions, the government’s intention was that she could prove her identity to any of these bodies through a single online platform: Verify. This was a flagship project of the Government Digital Service (GDS), a unit working across departments to lead the government’s digital transformation. Much of GDS’ work was driven by the notion of ‘government as a platform’: government should design and build “supporting infrastructure” upon which others can build.

Squarely in line with this idea, Verify provides a “platform for identity.” GDS technologists wrote the software for the Verify platform, while the government then accredits companies as ‘identity providers’ (IdPs) which ‘plug into’ the platform to compete. An individual who seeks to access a government service online will see Verify on her screen and will be prompted by Verify to choose an identity provider. She will be redirected to that IdP’s website and must enter information such as her passport number or bank details. The IdP then checks this information against public and private databases before confirming her identity to the government service being requested. The individual therefore leaves the government website to verify her identity with a separate, private entity.
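
To make this flow concrete, the sketch below models the sequence just described in Python. It is purely illustrative: the function names and the “two pieces of evidence” rule are hypothetical stand-ins, not Verify’s actual protocol (the real service relied on SAML-based federation between the hub and its accredited IdPs, and the IdPs ran far richer checks).

```python
# Illustrative sketch only: a toy model of the Verify "hub" flow described
# above. All names and checks here are hypothetical; the real service
# federated with accredited identity providers (IdPs) using SAML.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Assertion:
    """What an IdP returns to the hub: a yes/no answer, never the raw evidence."""
    user_id: str
    verified: bool


def idp_verify(user_id: str, passport: Optional[str], bank_record: Optional[str]) -> Assertion:
    """Stand-in for a private IdP checking a user's evidence against public
    and private databases (e.g. passport records, credit files)."""
    evidence = [item for item in (passport, bank_record) if item]
    # Hypothetical rule: two independent pieces of evidence count as verified.
    return Assertion(user_id=user_id, verified=len(evidence) >= 2)


def access_service(user_id: str, chosen_idp: str, evidence: dict) -> str:
    # 1. The hub prompts the user to choose an IdP and redirects her there.
    # 2. The IdP collects and checks her evidence, away from the government site.
    assertion = idp_verify(user_id, evidence.get("passport"), evidence.get("bank_record"))
    # 3. The hub passes only the assertion back to the requested service,
    #    which grants or refuses access on that basis.
    return f"{chosen_idp}: access granted" if assertion.verified else f"{chosen_idp}: verification failed"


# A user who can offer only one piece of evidence is turned away, mirroring
# the low verification success rates discussed below.
print(access_service("user-1", "ExampleIdP", {"passport": "123456789"}))
```

Even in this toy form, the design choice is visible: the company, not the government, holds the user’s evidence, and the state sees only the company’s verdict.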

As GDS “didn’t think there was a market,” it aimed to support “the development of a digital identity market that spans both public and private sectors” so that users could “use their verified identity accounts for private sector transactions as well as government services.” After Verify went live in 2016, the government accredited seven IdPs, including credit reporting agency Experian and Barclays bank. Government would pay IdPs per user, with the price per user decreasing as user volumes increased. GDS intended Verify to become self-funding: government funding would end in Spring 2020, at which point the companies would take over responsibility. GDS was confident that the IdPs would “keep investing in Verify” and would “ensure the success of the market.”

But a market failed to emerge. The government spent over £200 million on Verify and lowered its estimate of its financial benefits by 75%. Though IdPs were supposed to take over responsibility for Verify, almost every company withdrew. After April 2020, new users could register with either the (privatized) Post Office or Digidentity, the only two remaining IdPs. But the Post Office is “a ‘white-label’ version of Digidentity that runs off the same back-end identity engine.” Rather than creating a market, a monopoly effectively emerged.

This highlights the flaws of the underlying approach. Government paid to develop and maintain the software, and then paid companies to use that software. Government also bore most of the risk: companies could enter the scheme, be paid tens of millions, then withdraw if the service proved less profitable than expected, without having invested in building or maintaining the infrastructure. This is reminiscent of the UK government’s decision to bear the costs of maintaining railway tracks while having private companies profit from running trains on these tracks. Government effectively subsidizes profit.

GDS had been founded as a response to failings in government IT outsourcing: instead of procuring overpriced technologies, GDS would write software itself. But this prioritization of in-house development was combined with an ideological notion that government technologists’ role is to “jump-start and encourage private sector investment” and to build digital infrastructure while relying on the market to deliver services using that infrastructure. This ideal of marketizing the digital state represents a new “orthodoxy” for digital government; the National Audit Office has highlighted the lack of “evidence underpinning GDS’s assumptions that a move to a private sector-led model [was] a viable option for Verify.”

These assumptions are particularly troubling here, as identity verification is an essential moment within state-to-individual interactions. Companies were positioned between government and individuals, and effectively became gatekeepers. An individual trying to access an online government service was disrupted, as she was redirected and required to go through a company. Equal access to services was splintered into a choice of corporate gateways.

This is significant as the rate of successful identity verifications through Verify hovered around 40-50%, meaning over half of attempts to access online government services failed. More worryingly, the verification rate depended on users’ demographic characteristics, with only 29% of Universal Credit (welfare benefits) claimants able to use Verify. If claimants were unable to prove their identity to the system, their benefits applications were often delayed. They had to wait longer to access payments to which they were entitled by right. Indeed, record numbers of claimants have been turning to food banks while they wait for their first payment. It is especially important to question the assumption that a company needed to be inserted between individuals and government services when the stakes – namely further deprivation, hunger, and devastating debt – are so high.

Verify’s replacement became inevitable, with only two IdPs remaining. Indeed, the government is now moving ahead with a new digital identity framework prototype. This arose from a consultation which focused on “enabling the use of digital identity in the private sector” and fostering and managing “the digital identity market.” A Cabinet Office spokesperson has stated that this framework is intended to work “for government and businesses.”

The government appears to be pushing on with the same model, despite recurrent warning signs throughout the Verify story. As the government’s digital transformation continues, it is vital that the assumptions underlying this marketization of the digital state are fundamentally questioned.

March 30, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Fearing the future without romanticizing the past: the role for international human rights law(yers) in the digital welfare state to be

Universal Credit is one of the foremost examples of a digital welfare system and the UK’s approach to digital government is widely copied. What can we learn from this case study for the future of international human rights law in the digital welfare state?

Last week, Victoria Adelmant and I organized a two-day workshop on digital welfare and the international rule and role of law, which was part of a series curated by Edinburgh Law School. While zooming in on Universal Credit (UC) in the United Kingdom, arguably one of the most developed digital welfare systems in the world, our objective was broader: namely to imagine how and why law, especially international human rights law, does and should play a role when the state goes digital. Below are some initial and brief reflections on the rich discussions we had with close to 50 civil servants, legal scholars, computer scientists, digital designers, philosophers, welfare rights practitioners, and human rights lawyers.

What is “digital welfare”? There is no agreed-upon definition. At the end of a United Nations country visit to the UK in 2018, on which I accompanied the UN Special Rapporteur on extreme poverty and human rights, we coined the term by writing that “a digital welfare state is emerging.” Since then, I have spent years researching and advocating around these developments in the UK and elsewhere. For me, digital welfare can be (imperfectly) defined as a welfare system in which interactions with beneficiaries and internal government operations rely on various digital technologies.

In UC, that means you apply for and maintain your benefits online, your identity is verified online, your monthly benefits calculation is automated in real-time, fraud detection happens with the help of algorithmic models, etc. Obviously, this does not mean there is no human interaction or decision-making in UC. And the digitalization of the welfare state did not start yesterday either; it is a process many decades in the making. For example, a 1967 book titled The Automated State mentions the Social Security Administration in the United States as having “among the most extensive second-generation computer systems.” Today, digitalization is no longer just about data centers or government websites, and systems like UC exemplify how digital technologies affect each part of the welfare state.

So, what are some implications of digital welfare for the role of law, especially for international human rights law?

First, as was pointed out repeatedly in the workshop, law has not disappeared from the digital welfare state altogether. Laws and regulations, government lawyers, welfare rights advisors, and courts are still relevant. As for international human rights law, it is no secret that its institutionalization by governments, especially when it comes to economic and social rights, has never been perfect. Nor should we romanticize the past by imagining the previous law- and rules-based welfare state as a rule-of-law utopia. I was reminded of this recently when I watched a 1975 documentary by Frederick Wiseman about a welfare office in downtown Manhattan, which was far from utopian. Applying law and rights to the welfare state has been a long and continuous battle.

Second, while there is much to fear about digitalization, we shouldn’t lose sight of its promises for the reimagination of a future welfare state. Several workshop participants emphasized the potential user-friendliness and rationality that digital systems can bring. For example, the UC system quickly responded to a rise in unemployment caused by the pandemic, while online application systems for unemployment benefits in the United States crashed. Welfare systems also have a long history of bureaucratic errors. Automation offers, at least in theory, a more rational approach to government. Such digital promises, however, are only as good as the political impetus that drives digital reform, which is often more focused on cost-savings, efficiency, and detecting supposedly ubiquitous benefit fraud than truly making welfare more user-friendly and less error-prone.

What role does law play in the future digital welfare state? Several speakers described the previous approach to the delivery of welfare benefits as top-down (“waterfall”): legislation would be passed, regulations would be written, and the welfare bureaucracy would implement them as a final step. Not only is delivery now taking place digitally, but digital delivery follows a different logic. It has become “agile,” “iterative,” and “user-centric,” creating a feedback loop between legislation, ministerial rules and lower-level policy-making, and implementation. Implementation changes fast and often (we are now at UC 167.0).

It is also an open question what role lawyers will play. Government lawyers are changing primary social security legislation to make it fit the needs of digital systems. The idea of ‘Rules as Code’ is gaining steam: it aims to produce legislation that is also machine-readable, in order to support digital delivery. But how influential are lawyers in the overall digital transformation? While digital designers are crucial actors in designing digital welfare, lawyers may increasingly be seen as “dinosaurs,” slightly out of place when wandering into technologist-dominated meetings with post-it notes, flowcharts, and bouncy balls. Another “dinosaur” may be the “street-level bureaucrat.” Such bureaucrats have played an important role in interpreting and individualizing general laws. Yet they are at risk of being side-lined by coders and digital designers, who increasingly shape welfare delivery and thereby engage in their own form of legal interpretation.

Most importantly, from the perspective of human rights: what happens to humans who have to interact with the digital welfare state? In discussions about digital systems, they are all too easily forgotten. Yet, there is substantial evidence of the human harm that may be inflicted by digital welfare, including deaths. While many digital transformations in the welfare state are premised on the methodology of “user-centered design,” its promise is not matched by its practice. Maybe the problem starts with conceptualizing human beings as “users,” but the shortcomings go deeper and include a limited mandate for change and interacting only with “users” who are already digitally visible.

While there is every reason to fear the future of digital welfare states, especially if developments turn toward lawlessness, such fear does not have to lead to outright rejection. Like law, digital systems are human constructs, and humans can influence their shape and form. The challenge for human rights lawyers and others is to imagine not only how law can be injected into digital welfare systems, but how such systems can be built on and can embed the values of (human rights) law. Whether it is through expanding the concept and practice of “user-centered design” or being involved in designing rights-respecting digital welfare platforms, (human rights) lawyers need to be at the coalface of the digital welfare state.

March 23, 2021. Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in its “Transformer States” conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan van Veen and Victoria Adelmant explored the impacts of the outsourcing at the heart of South Africa’s social security system with Lynette Maart, National Director of the South African human rights organization the Black Sash. This blog summarizes that conversation.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage has been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. CPS partnered with Grindrod Bank to obtain a banking license and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a nationwide network of 10,000 “paypoints” where social grant payments could be withdrawn; recipients were never further than 5km from a paypoint.

Thanks to its position as the sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries, including Moneyline (a lending group), Smartlife (a life insurance provider), and Manje Mobile (a mobile money service), were able to exploit this “customer base” by cross-selling services, and were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be deducted automatically from SASSA grants the moment they were paid out on CPS’s infrastructure. These immediate, automatic deductions from government transfers made recipients a lucrative, practically risk-free market for lenders and other service providers. The Black Sash found that women were going to paypoints at 4.30am in their pajamas to try to withdraw their grants before deductions left them with hardly anything.

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as a local SASSA representative, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its own payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions from recipients’ payments by Net1 subsidiaries.
  • SASSA was entirely reliant on CPS and unable to reach its own beneficiaries. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue because SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. While this was an exaggeration, when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas, and rural recipients therefore “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a Masters degree from Yale University and previously worked as the CFO of Songkick.

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems, and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been assigned unique Aadhaar identification numbers. To create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, as well as demographic information, including name, birthdate, and address. Once an individual is set up in the Aadhaar system (which can be complicated, depending on whether their biometric data can be captured easily, where they live, and how mobile they are), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s fingerprint or iris, though one-time passcodes and QR codes can also be used.
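
As a rough sketch of the enrolment-and-authentication structure just described, consider the toy model below. Everything in it is a hypothetical simplification: real Aadhaar authentication runs against UIDAI’s central database rather than an in-memory dictionary, and biometric matching is probabilistic rather than an exact comparison.

```python
# Toy model of the enrol-then-authenticate pattern described above.
# Hypothetical simplification: real biometric matching is probabilistic and
# happens against UIDAI's central systems, not a local dictionary.

enrolled: dict[str, dict] = {}  # Aadhaar number -> stored record


def enrol(aadhaar_number: str, fingerprint: str, name: str, address: str) -> None:
    """Enrolment stores biometric and demographic data under a unique number."""
    enrolled[aadhaar_number] = {
        "fingerprint": fingerprint,
        "name": name,
        "address": address,
    }


def authenticate(aadhaar_number: str, fingerprint_scan: str) -> bool:
    """A service checks a live scan against the stored record. A worn or
    unreadable fingerprint (common among manual labourers, as discussed
    below) simply fails the match, and access to the service fails with it."""
    record = enrolled.get(aadhaar_number)
    return record is not None and record["fingerprint"] == fingerprint_scan


enrol("0000-1111-2222", fingerprint="ridge-pattern-A", name="Example", address="...")
print(authenticate("0000-1111-2222", "ridge-pattern-A"))     # True: e.g. ration shop access
print(authenticate("0000-1111-2222", "worn-ridge-pattern"))  # False: service denied
```

The point of the sketch is the single point of failure it exposes: when one biometric match gates food rations, banking, and marriage registration alike, a failed match propagates across all of them.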

The welfare “façade”

The Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its stated vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity, and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, justifying Aadhaar as a way for them to access welfare. However, recent research suggests that fewer than 0.03% of Indian residents lacked formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of service. The government asserts that the system is “self-cleaning,” meaning that individuals must fix their identity records themselves: they must manually correct errors in their name or date of birth, for example, despite not always having the resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in developing countries. The last decade of research, however, reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it purported to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was designed to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given its breadth and reach, the database is an attractive asset for profit-making private enterprises and is seen as providing the foundation for an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be confused with identification. Usha noted that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, as during the Holocaust and the Yugoslav Wars. Further risks arise from the fact that identification systems like Aadhaar “fix” a single identity for each individual. Parts of a person’s identity that they may wish to keep separate (for example, their status as a sex worker, health information, or socio-economic status) are combined in a single dataset and made available in a variety of contexts, even if that data is outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redraw attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

TECHNOLOGY & HUMAN RIGHTS

On the Frontlines of the Digital Welfare State: Musings from Australia

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea for poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of newly-unemployed people were outraged to find that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I had a certified disability affecting my ability to seek work, the digital system had unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. The system had “glitched,” a popular term used by those in power when payment systems fail. After I narrowly missed a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I never even got to speak to a person who could have resolved the issue. This is the danger of trusting digital technology over humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it’s not run by a bank; there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can only withdraw 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (for example, purchases of alcohol are blocked, as are purchases from online retailers like eBay). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations. It is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning that after a certain point in the month, the “limit” for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. They are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or a second-hand fridge. When you can’t use cash, avenues for obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority of people affected by this system have no such problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money: an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and internet to check your account balance, which can be impossible for those on low incomes. Pandemic restrictions close the pubs, universities, cafes, and libraries that people rely on for internet access. Those without access are left by the wayside. “Glitches” are also common in Indue accounts: money can go missing without explanation. This ruins account-holders’ plans and forces them to waste hours in non-stop arguments with brick-wall bureaucracy and faceless people telling them they don’t have access to their own money.

Politicians recently had the opportunity to reject this system of brutality. The “Cashless Welfare Card trials” were slated to end on December 31, 2020, and a bill was voted on to determine whether these “trials” would continue. The people affected by this system had already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians’ offices and beg them to see the hell they’re experiencing. They used their internet data to email and rally others to do the same. I personally delivered letters to two politicians’ offices, complete with academic studies detailing the problems with IM. For a split second, it seemed like the politicians listened, and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. The lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to “glitches” or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea for poverty?

February 1, 2021. Nijole Naujokas, an Australian activist and writer who is passionate about social justice for the vulnerable. She is the current Secretary of the Australian Unemployed Workers’ Union and is completing an Honours degree in Creative Writing at The University of Adelaide.

CSOs Call for a Full Integration of Human Rights in the Deployment of Digital Identification Systems

TECHNOLOGY & HUMAN RIGHTS

CSOs Call for a Full Integration of Human Rights in the Deployment of Digital Identification Systems

The Principles on Identification for Sustainable Development (the Principles), the creation of which was facilitated by the World Bank’s Identification for Development (ID4D) initiative in 2017, represent one of the few attempts at global standard-setting for the development of digital identification systems. They are endorsed by many global and regional organizations (the “Endorsing Organizations”) that are active in funding, designing, developing, and deploying digital identification programs across the world, especially in developing and less developed countries.

Digital identification programs are emerging across the world in various forms and will have long-term impacts on the lives and rights of the individuals enrolled in them. Engagement with civil society can help ensure that the lived experiences of people affected by these identification programs inform the Principles and the practices of International Organizations.

Access Now, Namati, and the Open Society Justice Initiative co-organized a Civil Society Organization (CSO) consultation in August 2020 that brought together over 60 civil society organizations from across the world for dialogue with the World Bank’s ID4D Initiative and the Endorsing Organizations. The consultation occurred alongside the first review and revision of the Principles, which was led by the Endorsing Organizations during 2020.

The consultation provided a platform for civil society feedback on revisions to the Principles, as well as for dialogue around the roles of International Organizations (IOs) and Civil Society Organizations in developing rights-respecting digital identification programs.

This new civil society-drafted report presents a summary of the top-level comments and discussions that took place in the meeting, including recommendations such as: 

  1. There is an urgent need for human rights criteria to be recognized as a tool for the evaluation and oversight of existing and proposed digital identification systems, including throughout the Principles document;
  2. Endorsing Organizations should commit to applying the Principles in practice, including by affirming that their support will extend only to identification programs that align with the Principles;
  3. CSOs need to be formally recognized as partners with governments and corporations in designing and implementing digital identification systems, including through greater country-level engagement with CSOs from the earliest stages of potential digital identification projects through to the monitoring of ongoing implementation; and
  4. Digital identification systems across the globe are already being deployed in ways that enable repression through enhanced censorship, exclusion, and surveillance, but centering transparent and democratic processes as drivers of the development and deployment of these systems can mitigate these and other risks.

Following the consultation and in line with this new report, we welcome the opportunity to further integrate the principles of the Universal Declaration of Human Rights and other sources of human rights in international law into the Principles on Identification and into the design, deployment, and monitoring of digital identification systems in practice. We encourage the establishment of permanent and formal structures for the engagement of civil society organizations in global and national-level processes related to digital identification, in order to ensure that identification technologies are used in service of human agency and dignity and that their deployment does not further harm the exercise of fundamental rights.

We call on United Nations and regional human rights mechanisms, including the High Commissioner for Human Rights, treaty bodies, and Special Procedures, to take up, as an urgent agenda item under their respective mandates, the severe human rights risks posed by digital identification systems.

We welcome further dialogue and engagement with the World Bank’s ID4D Initiative and other Endorsing Organizations and promoters of digital identification systems in order to ensure oversight and guidance towards human rights-aligned implementation of those systems.

This post was originally published as a press release on December 17, 2020, signed by the following organizations:

  1. Access Now
  2. AfroLeadership
  3. Asociación por los Derechos Civiles (ADC)
  4. Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
  5. Derechos Digitales
  6. Development and Justice Initiative 
  7. Digital Welfare State and Human Rights Project, Center for Human Rights and Global Justice
  8. Haki na Sheria Initiative 
  9. Human Rights Advocacy and Research Foundation (HRF)
  10. Myanmar Centre for Responsible Business (MCRB) 
  11. Namati

Statements of the Digital Welfare State & Human Rights Project do not purport to represent the views of NYU or the Center, if any.

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

TECHNOLOGY & HUMAN RIGHTS

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

On November 23, 2020, the Center for Human Rights and Global Justice’s Digital Welfare State and Human Rights Project hosted the third virtual conversation in its series “Transformer States: A Conversation Series on Digital Government and Human Rights.” Christiaan van Veen and Victoria Adelmant interviewed Eve Vincent, senior lecturer in the Department of Anthropology at Macquarie University and author of a crucial report on the lived experiences of participants in one of the first Cashless Debit Card trials, in Ceduna, South Australia.

The Cashless Debit Card is a debit card currently used in parts of Australia to deliver benefit income to welfare recipients. Crucially, it is a tool of compulsory income management: the card “quarantines” 80% of a recipient’s payment, preventing this 80% from being withdrawn as cash and blocking attempted purchases of alcohol or gambling products. It is similar to, and intensifies, a previous scheme of debit card-based income management known as the “Basics Card.” This earlier card was introduced after a 2007 report into child sexual abuse in indigenous communities in Australia’s Northern Territory, which identified alcoholism, substance abuse, and gambling as major causes of such abuse. One of the measures taken was the requirement that indigenous communities’ benefit income be received on a Basics Card, which quarantined 50% of benefit payments. The Basics Card was later extended to non-indigenous welfare recipients, but it remained disproportionately targeted at indigenous communities.

Following a 2014 report by mining magnate Andrew Forrest on inequality between indigenous and non-indigenous groups in Australia, the government launched the Cashless Debit Card to gradually replace the Basics Card. The Cashless Debit Card would quarantine 80% of benefit income on the card and would block spending where alcohol is sold or where gambling takes place. Initial trials again targeted remote indigenous areas. The communities in the first trials were presented as parasitic on the welfare state and in crisis with regard to alcohol abuse, assault, and gambling. It was argued that drastic intervention was warranted: the government should step in to take care of these communities, as they were deemed unable to look after themselves. Income management would assist in this paternalistic intervention, fostering responsibility and curbing alcoholism and gambling by blocking their purchase. Many of Eve’s research participants found these justifications offensive and infantilizing. The Cashless Debit Card is now being trialed in more populous areas with more non-indigenous people, and the narrative has shifted: justifications for cards for non-indigenous people have focused more on the need to teach financial literacy and budgeting skills.

Beyond the humiliating underlying stereotypes, the Cashless Debit Card itself leaves card-holders feeling stigmatized. While the non-acceptance of Basics Cards at certain shops had led to prominent “Basics Card not accepted here” signs, the Cashless Debit Card was intended to be more subtle. It is integrated with EFTPOS technology, meaning it can theoretically be used in any shop with one of these ubiquitous card-reading devices. EFTPOS terminals in casinos or pubs are blocked, but these establishments can arrange with the government to have some discretion. A pub can arrange to allow Cashless Debit Card-holders to pay for food but not alcohol, for example, thereby not excluding them entirely. Despite this purported subtlety, individuals reported feeling anxious about using the card, as the technology was proving unreliable and inconsistent, accepted one day but not the next. When the card was declined, sometimes seemingly at random, this was deeply humiliating. Card-holders would have to gather their shopping and return it to the shelves under the judging gaze of others, potentially people they know.

Separately, some card-holders had to use public computers to log into their accounts to check their card balance, highlighting the reliance of such schemes on strong digital infrastructure and on individuals’ access to connected devices. But some Cashless Debit Card-holders were quite positive about the card: there is, of course, a diversity of opinions and experiences. Some found that the card’s fortnightly cycle had helped them with budgeting and thought the app on which they could check their balance was a user-friendly and effective budgeting tool.

The Cashless Debit Card scheme is run by a company named Indue, continuing decades-long trends of outsourcing welfare delivery. Many participants in Eve’s research spoke positively of their experience with Indue, finding staff on helplines to be helpful and efficient. But many objected on principle to the card’s privatization and to the fact that profits are being made on the basis of their poverty. The Cashless Debit Card costs AUD 10,000 per participant per year to administer: many card-holders were outraged that such an expense is outlaid to try to control how they spend their very meager income. Recently, the four biggest banks in Australia and the government-owned Australia Post have been in talks about taking over the management of the scheme. This raises an interesting parallel with South Africa, where social grants were originally paid through a private provider but, following a scandal regarding the tender process and the financial exploitation of poor grant recipients, public providers stepped in again.

As an anthropologist, Eve takes as a starting point the importance of listening to the people affected and foregrounding their lived experience, an approach that resonates with common methods in human rights research. Interestingly, many Cashless Debit Card-holders used the language of human rights to express indignation about the scheme and what it represents. Reminiscent of Sally Engle Merry’s work on the “vernacularization” of human rights, card-holders invoked human rights in a manner quite specific to the Aboriginal Australian context and history. Eve’s research participants often compared the Cashless Debit Card trials to the past, when the wages of indigenous peoples had been stolen and their access to money was tightly controlled. They referred to that time as the “time before rights,” before equal citizenship rights had been won in legislation. Today, they argued, now that indigenous communities have rights, this kind of intervention and control of communities by the government is unacceptable. As one of Eve’s research participants put it, the government has, through the Cashless Debit Card, “taken away our rights.”

December 4, 2020. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.