Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

TECHNOLOGY & HUMAN RIGHTS

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the real dangers and impacts of Social Credit. On March 31, 2021, Christiaan Van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nationwide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely-discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program called the Joint Rewards and Sanctions mechanism shares information across government agencies about companies which have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the People’s Court of China maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a six-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used for incentivizing behavior and not as sanctions or to limit access to basic public services. Presently, each of the systems described here is separate from the others.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems, of course, raise serious issues surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit technological solutionism which invisibilizes the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions such as an app which “scores” firms based on their record of wage payments, a depoliticized technological fix is promised to solve systemic problems. In the process, it obscures the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Everyone Counts! Ensuring that the human rights of all are respected in digital ID systems


The Everyone Counts! initiative was launched in the fall of 2020 with a firm commitment to a simple principle: the digital transformation of the state can only qualify as a success if everyone’s human rights are respected. Nowhere is this more urgent than in the context of so-called digital ID systems.

Research, litigation, and broader advocacy on digital ID in countries like India and Kenya have already revealed the dangers of exclusion from digital ID for ethnic minority groups[1] and for people living in poverty.[2] However, a significant gap still exists between the magnitude of the human rights risks involved and the urgency of research and action on digital ID in many countries. Despite their active promotion and use by governments, international organizations and the private sector, in many cases we simply do not know how these digital ID systems lead to social exclusion and human rights violations, especially for the poorest and most marginalized.

Therefore, the Everyone Counts! initiative aims to engage in both research and action to address social exclusion and related human rights violations that are facilitated by government-sponsored digital ID systems.

Does the emperor have new clothes? The yawning evidence gap on digital ID

The common narrative behind the rush towards digital ID systems, especially in the Global South, is by now familiar: “As many as 1 billion people across the world do not have basic proof of identity, which is essential for protecting their rights and enabling access to services and opportunities.”[3] Digital ID is presented as a key solution to this problem, while simultaneously promising lower income countries the opportunity to “leapfrog” years of development via digital systems that assist in “improving governance and service delivery, increasing financial inclusion, reducing gender inequalities by empowering women and girls, and increasing access to health services and social safety nets for the poor.”[4]

This perspective, for which the World Bank and its Identification for Development (ID4D) Initiative have become the official “anchor” internationally, presents digital ID systems as a force for good. The Bank acknowledges that exclusionary issues may arise, but is confident that such issues can be overcome through good intentions and safeguards. Digging underneath the surface of these confident assertions, however, one finds remarkably little research into the overall impact of digital ID systems on social exclusion and a range of related human rights. For instance, after entering the digital ID space in 2014, publishing prolifically, and guiding billions of development dollars into furthering this agenda, the World Bank’s ID4D team concedes in its 2020 Annual Report that “given that this topic is relatively new to the development agenda, empirical research that rigorously evaluates the impact of ID systems on development outcomes and the effectiveness of strategies to mitigate risks has been limited.”[5] In other words, despite warning signs from several countries around the world, including chilling stories of people who have died because they were shut out of biometric ID systems,[6] the digital ID agenda moves full steam ahead without a full understanding of its exclusionary potential.

Making sure that everyone truly counts

While the Everyone Counts! initiative only has a fraction of the resources of ID4D, we hope to inject some much needed reality into this discourse through our work. We will do this by undertaking–together with research partners in different countries–empirical human rights research that investigates how the introduction of a digital ID system leads to or exacerbates social exclusion. For example, we are currently undertaking a joint research project with Ugandan research partners focused on Uganda’s digital ID system, Ndaga Muntu, and its impact on poor women’s right to health, and older persons’ right to social assistance.

Our presence at a leading university and law school underlines our commitment to high quality and cutting-edge research, but we are not in the business of knowledge accumulation purely for its own sake. We will aim to transform our research into action. This could come in the form of strategic litigation and advocacy, such as the work by our partners described below, or in the form of network building and information sharing. For instance, together with co-sponsors like the UN Economic Commission for Africa (UNECA) and the Open Society Justice Initiative (OSJI), we are hosting a workshop series for African civil society organizations on digital ID and exclusion. The series creates a space where activists hoping to resist the exclusion associated with digital ID can come together, gain access to tools, information and networks, and form a community of practice that facilitates further activism.

Ensuring non-discriminatory access to vaccines: An early case study 

A recent example from Uganda demonstrates just how effective targeted action against digital ID systems can be. The government began rolling out its national digital ID system, Ndaga Muntu, as early as 2015, and it has gradually become a mandatory requirement for access to a range of social services in Uganda.

To address the threat of COVID-19, the Ugandan government recently began a free, national vaccine program. Among the groups eligible to receive the vaccine were all adults over the age of 50. On March 2, however, the Ugandan Minister of Health announced that only those Ugandan citizens who could produce a Ndaga Muntu card, or at least a national ID number (NIN), would be able to receive the vaccine. Conservative estimates suggest that over 7 million eligible Ugandans have not yet received their national ID card.

Our research partners, the Initiative for Social and Economic Rights (ISER) and Unwanted Witness (UW), sued the Ugandan government on March 5 to challenge the mandatory requirement of the Ndaga Muntu.[7] They argued that the national ID requirement would not only exclude millions of eligible older persons from receiving the vaccine, but would also set a dangerous precedent allowing for further discrimination in other areas of social services.[8]

On March 9, the Ministry of Health announced that it would change the national ID requirement so that alternative forms of identification documents, which are much more accessible to poor Ugandans, could be used to access the COVID-19 vaccine.[9] This was a critical victory for the millions of Ugandans who seek access to the life-saving vaccine–but it is also a warning sign of the subtle and pernicious ways that the digital ID system may be used to exclude.

Humans first, not systems first

The Ugandan case study shows the urgent need for the human rights movement to engage in discussions about digital transformation so that fundamental rights are not lost in the rush to build a “modern, digital state.” In our work on this initiative, we will remain similarly committed to prioritizing how individual human beings are affected by digital ID systems. Listening to their stories, understanding the harms they experience, and channeling their anger and frustration to other, more privileged and powerful audiences, is our core purpose.

Digital transformation is a field prone to a utilitarian logic: “if 99% of the population is able to register for a digital ID system, we should celebrate it as a success.” Our qualitative work not only challenges the supposed benefits for these 99%, but emphasizes that the remaining 1% represents a multitude of individual human beings who may be victimized. Our research so far has only confirmed our intuition that digital ID systems can deliver significant harms, particularly for those who are poorest, most vulnerable, and least powerful in society. These excluded voices deserve to be heard and to become a decisive factor in deciding the shape of our digital future.

April 6, 2021. Christiaan van Veen and Katelyn Cioffi.

Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law. 

Katelyn Cioffi, Senior Research Scholar, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

Marketizing the digital state: the failure of the ‘Verify’ model in the United Kingdom


Verify, the UK government’s digital identity program, sought to construct a market for identity verification in which companies would compete. But the assumption that companies should be positioned between government and individuals who are trying to access services has gone unquestioned.

The story of the UK government’s Verify service has been told as one of outright failure and a colossal waste of money. Intended as the single digital portal through which individuals accessing online government services would prove their identity, Verify underperformed for years and is now effectively being replaced. But accounts of its demise often focus on technical failures and inter-departmental politics, rather than evaluating the underlying political vision that Verify represents. This is a vision of market creation, whereby the government constructs a market for identity verification within which private companies can compete. As Verify is replaced and the UK government’s ‘digital transformation’ continues, the failings of this model must be examined.

Whether an individual wants to claim a tax refund from Her Majesty’s Revenue and Customs, renew her driver’s license through the Driver and Vehicle Licensing Agency, or receive her welfare payment from the Department for Work and Pensions, the government’s intention was that she could prove her identity to any of these bodies through a single online platform: Verify. This was a flagship project of the Government Digital Service (GDS), a unit working across departments to lead the government’s digital transformation. Much of GDS’ work was driven by the notion of ‘government as a platform’: government should design and build “supporting infrastructure” upon which others can build.

Squarely in line with this idea, Verify provides a “platform for identity.” GDS technologists wrote the software for the Verify platform, while the government then accredits companies as ‘identity providers’ (IdPs) which ‘plug into’ the platform to compete. An individual who seeks to access a government service online will see Verify on her screen and will be prompted by Verify to choose an identity provider. She will be redirected to that IdP’s website and must enter information such as her passport number or bank details. The IdP then checks this information against public and private databases before confirming her identity to the government service being requested. The individual therefore leaves the government website to verify her identity with a separate, private entity.

As GDS “didn’t think there was a market,” it aimed to support “the development of a digital identity market that spans both public and private sectors” so that users could “use their verified identity accounts for private sector transactions as well as government services.” After Verify went live in 2016, the government accredited seven IdPs, including credit reporting agency Experian and Barclays bank. Government would pay IdPs per user, with the price per user decreasing as user volumes increased. GDS intended Verify to become self-funding: government funding would end in Spring 2020, at which point the companies would take over responsibility. GDS was confident that the IdPs would “keep investing in Verify” and would “ensure the success of the market.”

But a market failed to emerge. The government spent over £200 million on Verify and lowered its estimate of its financial benefits by 75%. Though IdPs were supposed to take over responsibility for Verify, almost every company withdrew. After April 2020, new users could register with either the (privatized) Post Office or Digidentity, the only two remaining IdPs. But the Post Office is “a ‘white-label’ version of Digidentity that runs off the same back-end identity engine.” Rather than creating a market, a monopoly effectively emerged.

This highlights the flaws of the underlying approach. Government paid to develop and maintain the software, and then paid companies to use that software. Government also bore most of the risk: companies could enter the scheme, be paid tens of millions, then withdraw if the service proved less profitable than expected, without having invested in building or maintaining the infrastructure. This is reminiscent of the UK government’s decision to bear the costs of maintaining railway tracks while having private companies profit from running trains on these tracks. Government effectively subsidizes profit.

GDS had been founded as a response to failings in the outsourcing of government IT: instead of procuring overpriced technologies, GDS would write software itself. But this prioritization of in-house development was combined with an ideological notion that government technologists’ role is to “jump-start and encourage private sector investment” and to build digital infrastructure while relying on the market to deliver services using that infrastructure. This ideal of marketizing the digital state represents a new “orthodoxy” for digital government; the National Audit Office has highlighted the lack of “evidence underpinning GDS’s assumptions that a move to a private sector-led model [was] a viable option for Verify.”

These assumptions are particularly troubling here, as identity verification is an essential moment within state-to-individual interactions. Companies were positioned between government and individuals, and effectively became gatekeepers. An individual trying to access an online government service was disrupted, as she was redirected and required to go through a company. Equal access to services was splintered into a choice of corporate gateways.

This is significant as the rate of successful identity verifications through Verify hovered around 40-50%, meaning over half of attempts to access online government services failed. More worryingly, the verification rate depended on users’ demographic characteristics, with only 29% of Universal Credit (welfare benefits) claimants able to use Verify. If claimants were unable to prove their identity to the system, their benefits applications were often delayed. They had to wait longer to access payments to which they were entitled by right. Indeed, record numbers of claimants have been turning to food banks while they wait for their first payment. It is especially important to question the assumption that a company needed to be inserted between individuals and government services when the stakes – namely further deprivation, hunger, and devastating debt – are so high.

Verify’s replacement became inevitable, with only two IdPs remaining. Indeed, the government is now moving ahead with a new digital identity framework prototype. This arose from a consultation which focused on “enabling the use of digital identity in the private sector” and fostering and managing “the digital identity market.” A Cabinet Office spokesperson has stated that this framework is intended to work “for government and businesses.”

The government appears to be pushing on with the same model, despite recurrent warning signs throughout the Verify story. As the government’s digital transformation continues, it is vital that the assumptions underlying this marketization of the digital state are fundamentally questioned.

March 30, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Fearing the future without romanticizing the past: the role for international human rights law(yers) in the digital welfare state to be


Universal Credit is one of the foremost examples of a digital welfare system and the UK’s approach to digital government is widely copied. What can we learn from this case study for the future of international human rights law in the digital welfare state?

Last week, Victoria Adelmant and I organized a two-day workshop on digital welfare and the international rule and role of law, which was part of a series curated by Edinburgh Law School. While zooming in on Universal Credit (UC) in the United Kingdom, arguably one of the most developed digital welfare systems in the world, our objective was broader: namely to imagine how and why law, especially international human rights law, does and should play a role when the state goes digital. Below are some initial and brief reflections on the rich discussions we had with close to 50 civil servants, legal scholars, computer scientists, digital designers, philosophers, welfare rights practitioners, and human rights lawyers.

What is “digital welfare?” There is no agreed-upon definition. At the end of a United Nations country visit to the UK in 2018, where I accompanied the UN Special Rapporteur on extreme poverty and human rights, we coined the term by writing that “a digital welfare state is emerging”. Since then, I have spent years researching and advocating around these developments in the UK and elsewhere. For me, the term digital welfare can be (imperfectly) defined as a welfare system in which interactions with beneficiaries and internal government operations rely on various digital technologies.

In UC, that means you apply for and maintain your benefits online, your identity is verified online, your monthly benefits calculation is automated in real-time, fraud detection happens with the help of algorithmic models, etc. Obviously, this does not mean there is no human interaction or decision-making in UC. And the digitalization of the welfare state did not start yesterday either; it is a process many decades in the making. For example, a 1967 book titled The Automated State mentions the Social Security Administration in the United States as having “among the most extensive second-generation computer systems.” Today, digitalization is no longer just about data centers or government websites, and systems like UC exemplify how digital technologies affect each part of the welfare state.

So, what are some implications of digital welfare for the role of law, especially for international human rights law?

First, as was pointed out repeatedly in the workshop, law has not disappeared from the digital welfare state altogether. Laws and regulations, government lawyers, welfare rights advisors, and courts are still relevant. As for international human rights law, it is no secret that its institutionalization by governments, especially when it comes to economic and social rights, has never been perfect. And neither should we romanticize the past by imagining a previous law and rules-based welfare state as a rule of law utopia. I was reminded of this recently when I watched a 1975 documentary by Frederick Wiseman about a welfare office in downtown Manhattan which was far from utopian. Applying law and rights to the welfare state has been a long and continuous battle.

Second, while there is much to fear about digitalization, we shouldn’t lose sight of its promises for the reimagination of a future welfare state. Several workshop participants emphasized the potential user-friendliness and rationality that digital systems can bring. For example, the UC system quickly responded to a rise in unemployment caused by the pandemic, while online application systems for unemployment benefits in the United States crashed. Welfare systems also have a long history of bureaucratic errors. Automation offers, at least in theory, a more rational approach to government. Such digital promises, however, are only as good as the political impetus that drives digital reform, which is often more focused on cost-savings, efficiency, and detecting supposedly ubiquitous benefit fraud than truly making welfare more user-friendly and less error-prone.

What role does law play in the future digital welfare state? Several speakers described the previous approach to the delivery of welfare benefits as top-down (“waterfall”): legislation would be passed, regulations would be written, and these would then be implemented by the welfare bureaucracy as a final step. Not only is delivery now taking place digitally, but such digital delivery follows a different logic. Digital delivery has become “agile,” “iterative,” and “user-centric,” creating a feedback loop between legislation, ministerial rules and lower-level policy-making, and implementation. Implementation changes fast and often (we are now at UC 167.0).

It is also an open question what role lawyers will play. Government lawyers are changing primary social security legislation to make it fit the needs of digital systems. The idea of ‘Rules as Code’ is gaining steam and aims to produce legislation while also making sure it is machine-readable to support digital delivery. But how influential are lawyers in the overall digital transformation? While digital designers are crucial actors in designing digital welfare, lawyers may increasingly be seen as “dinosaurs,” slightly out of place when wandering into technologist-dominated meetings with post-it notes, flowcharts, and bouncy balls. Another “dinosaur” may be the “street-level bureaucrat.” Such bureaucrats have played an important role in interpreting and individualizing general laws. Yet, they are also at risk of being side-lined by coders and digital designers who increasingly shape and form welfare delivery and thereby engage in their own form of legal interpretation.

Most importantly, from the perspective of human rights: what happens to humans who have to interact with the digital welfare state? In discussions about digital systems, they are all too easily forgotten. Yet, there is substantial evidence of the human harm that may be inflicted by digital welfare, including deaths. While many digital transformations in the welfare state are premised on the methodology of “user-centered design,” its promise is not matched by its practice. Maybe the problem starts with conceptualizing human beings as “users,” but the shortcomings go deeper and include a limited mandate for change and interacting only with “users” who are already digitally visible.

While there is every reason to fear the future of digital welfare states, especially if developments turn toward lawlessness, such fear does not have to lead to outright rejection. Like law, digital systems are human constructs, and humans can influence their shape and form. The challenge for human rights lawyers and others is to imagine not only how law can be injected into digital welfare systems, but how such systems can be built on and can embed the values of (human rights) law. Whether it is through expanding the concept and practice of “user-centered design” or being involved in designing rights-respecting digital welfare platforms, (human rights) lawyers need to be at the coalface of the digital welfare state.

March 23, 2021. Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law.

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in their “Transformer States” conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan Van Veen and Victoria Adelmant explored the impacts of outsourcing at the heart of South Africa’s social security system with Lynette Maart, the National Director of the South African human rights organization The Black Sash. This blog summarizes the conversation and provides the event recording and additional readings below.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage have been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. Lacking a banking license of its own, CPS partnered with Grindrod Bank and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a nationwide network of 10,000 “paypoints” where social grant payments could be withdrawn. Recipients were never further than 5km from a paypoint.

Thanks to its position as sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries, including Moneyline (a lending group), Smartlife (a life insurance provider), and Manje Mobile (a mobile money service), were able to exploit this “customer base” to cross-sell services. Net1 subsidiaries were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be automatically deducted from the SASSA grants the very moment they were paid out on CPS’ infrastructure. Recipients became a lucrative, practically risk-free market for lenders and other service providers due to these immediate automatic deductions from government transfers. The Black Sash found that women were going to paypoints at 4.30am in their pajamas, trying to withdraw their grants before deductions left them with hardly anything.

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as local SASSA representatives, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions by Net1 subsidiaries from recipients’ payments.
  • SASSA was entirely reliant on CPS and unable to deliver grants to its beneficiaries itself. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue because SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. This was an exaggeration, but when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas, so rural recipients “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a Masters degree from Yale University and previously worked as the CFO of Songkick.

Breaking Through the Climate Gridlock with Citizen Power

CLIMATE & ENVIRONMENT

Breaking Through the Climate Gridlock with Citizen Power

Why climate advocates are increasingly turning to citizens’ assemblies to remedy governments’ sluggishness on climate change.

Climate change protesters holding a picket sign that reads: Stop Denying, Earth is Dying.
Shayna Douglas (unsplash)

Nearly thirty years ago, the international community formally recognized the urgency of the threat posed by climate change through the adoption of the UN Framework Convention on Climate Change (UNFCCC). Yet, based on the current trajectory of global greenhouse gas emissions, we are barreling towards a temperature increase that far exceeds the 1.5 to two degrees Celsius threshold, beyond which dangerous destabilization of the climate system becomes possible.

This decades-long gridlock on ambitious climate action has led climate advocates and concerned citizens to search for alternative methods to jumpstart action on climate change. Increasingly, climate activists – including Extinction Rebellion – have been turning to one method in particular: citizens’ assemblies. In this explainer, the Climate Litigation Accelerator (CLX) provides an introduction to this emerging trend.

What Is a Citizens’ Assembly?

Drawing inspiration from examples of participatory democracy in Ancient Greece, citizens’ assemblies are a form of “deliberative mini-publics.” They are usually convened to consider major public policy issues, like electoral reform. Though citizens’ assemblies vary in the details of their institutional design, they tend to share certain core features.

For example, citizens are generally chosen to participate through a random selection process. Citizens’ assemblies work because they’re assumed to be representative of the public at large and not systematically biased towards a particular viewpoint or segment of society. That’s why this step is critical in the assembly design process. 

Once in session, a citizens’ assembly typically begins with a series of activities intended to educate the participants on the issue – or issues – for which the assembly was convened. The educational component is followed by activities intended to provide a space for discussion with fellow citizen participants and deliberation of the issue. This can take place in a variety of forms, including small group discussions and plenary sessions.

The educational efforts and deliberation activities culminate in a final decision rendered by the citizens’ assembly. The nature of that decision depends on the issue under review, but generally the citizens’ assembly will adopt a series of policy proposals or positions on the issue and on sub-topics of the issue.

Citizens’ Assemblies: Pros and Cons

Advocates of citizens’ assemblies offer a number of justifications for using them to shape public policy. One of the most significant is that these assemblies are thought to help break persistent gridlock on major issues within the political system. Advocates also argue that citizens’ assemblies enhance the democratic legitimacy of policy choices that involve significant trade-offs and facilitate buy-in for those tough policy choices.

Over the past several decades, there has also been a movement towards incorporating greater public participation in democratic governance. Citizens’ assemblies are one mechanism to do just that, and the evidence demonstrates that citizens’ assemblies are effective tools to increase public engagement. Citizens’ assemblies can also help combat distrust in political institutions, which can endanger the conditions necessary for democracy to thrive.

Skeptics have urged more caution when considering whether to advance citizens’ assemblies. In particular, some observers have argued that citizens’ assemblies may incentivize elected policymakers to “outsource” tough decision-making to these assemblies. There is also no guarantee of a good or appropriate outcome, which is a source of concern for some skeptics. Indeed, given the rising tide of populism and polarization, the assemblies may be unable to reach a consensus or may advance suboptimal policies. 

Can Citizens’ Assemblies Jumpstart More Ambitious Action on Climate Change?

For many climate advocates, citizens’ assemblies are seen as a key tool in the fight to secure more ambitious action on climate change. For them, the issue is ripe for deliberation by a citizens’ assembly because of the longstanding gridlock that has stymied progress on the issue and because a citizens’ assembly adds legitimacy to the major trade-offs associated with policymaking on climate change.

Some have also argued that citizens’ assemblies are well-positioned to consider long-term problems – which climate change undoubtedly is – “because citizens need not worry about the short-term incentives of electoral cycles, giving them more freedom than elected politicians.”

Climate Citizens’ Assemblies: A Growing Trend

In spring 2020, British citizens met over six weekends for the U.K. Climate Assembly, where they considered what the United Kingdom should do to reach net zero greenhouse gas emissions by 2050. Ultimately, assembly members adopted a set of recommendations which were released in their final report. It remains to be seen how the government will respond to the Assembly’s findings and whether they will be incorporated into the U.K.’s climate policies.

In 2019 and 2020, French citizens had the opportunity to participate in Convention Citoyenne Pour le Climat, a national citizens’ assembly on climate change. The assembly was tasked with coming up with a series of policy measures, consistent with social justice, that would allow a forty percent reduction in France’s greenhouse gas emissions by 2030, relative to 1990 levels. The assembly’s report was released in 2020; though the ultimate impact of the assembly’s recommendations will become more apparent in the future, French president Emmanuel Macron has indicated that at least some of the assembly’s proposals will be incorporated into French policy.

What’s Next?

Climate advocates are taking citizens’ assemblies, which have historically operated within national boundaries, to the next level. In the fall of 2021, a global citizens’ assembly on climate change will be held in the lead up to COP26, aiming to jumpstart the COP process that has thus far failed to secure the emission reduction commitments necessary to limit global warming to well below two degrees Celsius. CLX will be closely documenting these developments. If citizens at the global assembly can find a path to ambitious climate action, so can global leaders.

March 2, 2021. César Rodríguez-Garavito and Jackie Gallant, The Earth Rights Research & Action program (TERRA Law).

In Markets We Cannot Trust: What the Texas Storm Reveals about Privatized Services

INEQUALITIES

In Markets We Cannot Trust: What the Texas Storm Reveals about Privatized Services

Millions of people in Texas went without power and heat during a brutal winter storm. This avoidable catastrophe was the result of trusting the market and private interests to deliver the public good.

Country Road Illuminated By Traffic in the Night With Stars on Clear Sky
PorqueNoStudios (iStock)

“I’m cold and huddled under blankets,” my mom texted me last week, on her second day without power. She is one of millions in Texas, the largest energy-producer in the United States, who went days without electricity or heat during the recent winter storm that killed 30 people. While local politicians moved quickly to falsely pin the blame on renewable energy, the breakdown in Texas demonstrates the folly of relying on private actors and markets to prepare for climate change, to look after the public good, and to guarantee basic rights.

The Texas power system is built on a “total trust in markets,” and the suffering last week is a consequence of that misplaced faith. In 1999, the state deregulated its electricity system, handing it over to a patchwork of private companies, and it now relies on “nearly unaccountable and toothless” regulatory agencies and voluntary guidelines.

The deregulated private companies predictably chose to prioritize short-term profit over investments in the system. They did not winterize the power grid—ignoring the advice of federal authorities and the lessons of a similar 2011 storm—and neglected to maintain a reserve margin for demand surges, unlike every other power system in North America.

The fallout has been unimaginable. More than 4.2 million households lost power in temperatures as low as 4 degrees Fahrenheit. Although the full death toll won’t be known for weeks, at least 30 people died in Texas, including six experiencing homelessness. Hundreds more were poisoned by their efforts to keep warm, such as running generators indoors. People slept in their cars. Clinics shuttered. People of color and low-income individuals were disproportionately affected, with predominantly Black and Latinx neighborhoods among the first to lose power.

Meanwhile, the deregulated market means some companies will receive an appalling windfall from the storm. Sky-high demand for energy during the cold weather drove prices through the roof, and now people who did not lose power face outrageous energy bills. “My savings is gone,” remarked one Dallas resident who now faces a nearly $17,000 bill. In the city of Denton, the rate per megawatt hour jumped from less than $24 to $2,400. The city will pay over $207 million for four days of power, more than it spends in a typical year.

However, despite its obvious failures, ideological commitment to the market remains on full display. Before the lights were even back on, politicians were lying about the cause of the outages and exploring how further deregulation could “help.”

This anemic vision of government, which is hardly shared by all Texans but too often dominates policymaking at the state level, can become a self-fulfilling prophecy. Sensible policies that guarantee basic rights but might diminish profit—such as regulation, planning, taxation, and public provision—are routinely written off as extreme because the government has successfully been recast as primarily a facilitator of markets. Capital-friendly decisions are conveniently, if erroneously, peddled as “win-win” and protective of individual freedoms. Neoliberalism has been internalized by the body politic.

Unfortunately, the market alone will never deliver equitable and reliable access to essential services. It cannot, on its own, guarantee the fulfillment of basic rights. Instead, running public services as an investment risks marginalizing their non-commercial purposes. This is why human rights activists, experts, and monitoring bodies routinely raise concerns about the risks of relying on the private sector to provide critical services. Running public services for a profit without robust regulation can lead to inequitable access, high costs, exclusion, and poor maintenance, while wasting taxpayer money and thwarting accountability.

As others have written, this crisis should serve as a “profound warning” in the context of climate change, which will lead to more frequent extreme weather events. Roads, water systems, power grids, housing, and other essential infrastructure desperately need upgrades. Texas shows us that continuing to rely on profit-focused companies to make those changes will leave many stranded. However, it doesn’t have to be this way. Around the world, energy systems are increasingly brought back under public control through a process called remunicipalization, in part due to private actors’ repeated failures to transition to renewable energy.

As people in Texas stood for hours in lines to enter bare grocery stores for the second time in less than a year, my sister wrote to me: “I now feel acutely aware of the fact that I will not be taken care of in a disaster. People will not turn on your lights, people will not give you heat when it’s freezing, people will not make sure you have good drinking water, and people will not make sure you don’t die of a horrible illness.” If markets continue to be allowed to stand in for government, she will be right.

February 23, 2021. Rebecca Riddell, Human Rights and Privatization Project at the Center for Human Rights and Global Justice at NYU School of Law.

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. In order to create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, as well as demographic information including name, birthdate, and address. Once an individual is enrolled in the Aadhaar system (a process that can be complicated, depending on how easily their biometric data can be captured, where they live, and how mobile they are), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s finger or iris, though One-Time Passcodes or QR codes can also be used.

The welfare “façade”

Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, therefore justifying Aadhaar as a way for them to access welfare. However, recent research suggests that less than 0.03% of Indian residents did not have formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of services. The government asserts that the system is “self-cleaning,” which means that individuals have to fix their identity record themselves. For example, they must manually correct errors in their name or date of birth, despite not always having resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in the context of developing countries. However, the last decade of research reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purportedly designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was designed to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given the breadth and reach of the database, it is an attractive asset to private enterprises for profit-making and is seen as providing the foundation for the creation of an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be conflated with identification. Usha notes that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to effectively choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, such as during the Holocaust and the Yugoslav Wars. Further, risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate—for example, their status as a sex worker, health information, or socio-economic status—are combined in a single dataset and made available in a variety of contexts, even if that data may be outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redraw attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

GJC Issues Statement on the Constitutional and Human Rights Crisis in Haiti

HUMAN RIGHTS MOVEMENT

GJC Issues Statement on the Constitutional and Human Rights Crisis in Haiti

The Global Justice Clinic, the International Human Rights Clinic at Harvard Law School, and the Lowenstein International Human Rights Clinic at Yale Law School issued a statement on February 13, 2021 expressing grave concern about the deteriorating human rights situation in Haiti. Credible evidence shows that President Jovenel Moïse has engaged in a pattern of conduct to create a Constitutional crisis and consolidate power that undermines the rule of law in the country. The three clinics call on the U.S. government to denounce recent acts by President Moïse that have escalated the constitutional crisis. They urge the U.S. to halt all deportation and expulsion flights to Haiti in this fragile time; to condemn recent violence against protestors and journalists; and to call for the release of those arbitrarily detained. With long experience working in solidarity with Haitian civil society, the clinics urge the U.S. government to recognize the right of the Haitian people to self-determination by neither insisting on nor supporting elections without evidence of concrete measures to ensure that they are free, fair, and inclusive.

The Clinics also sent a letter expressing similar concerns to the member states of the United Nations Security Council ahead of their meeting on February 22, 2021, which is expected to include a briefing on Haiti from the Special Representative of the Secretary-General and head of the UN Integrated Office in Haiti (BINUH).

February 14, 2021

This post reflects the statement of the Global Justice Clinic, and not necessarily the views of NYU, NYU Law, or the Center for Human Rights and Global Justice.

On the Frontlines of the Digital Welfare State: Musings from Australia

TECHNOLOGY & HUMAN RIGHTS

On the Frontlines of the Digital Welfare State: Musings from Australia

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea to poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of the newly-unemployed were outraged that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I had a certified disability affecting my job-seeking ability, the digital system had unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. Unfortunately, the system had “glitched,” a popular term used by those in power for when payment systems fail. When I narrowly missed a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I never even got to speak to a person who could have resolved the issue. This is the danger of trusting digital technology above humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it’s not run by a bank; there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can only withdraw 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (for example, purchases of alcohol and online retailers like eBay are restricted). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations. It is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning after a certain point in the month, the ‘limit’ for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. They are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or buying a second-hand fridge. When you can’t use cash, avenues of obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority affected by this system have no such problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money; an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and the internet to check your account balance, which can be impossible for those on low incomes. Pandemic restrictions closed the pubs, universities, cafes, and libraries that people rely on for internet access. Those without access are left by the wayside. "Glitches" are also common in Indue accounts: money can go missing without explanation. This ruins account-holders' plans and forces them to waste hours in non-stop arguments with brick-wall bureaucracy and faceless people telling them they don't have access to their own money.

Politicians recently had the opportunity to reject this system of brutality. The "Cashless Welfare Card trials" were slated to end on December 31, 2020, and a bill was voted on to determine whether these "trials" would continue. The people affected by this system had already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians' offices and beg them to see the hell they were experiencing. They used their internet data to email and to rally others to do the same. I personally delivered letters to two politicians' offices, complete with academic studies detailing the problems with IM. For a moment, it seemed like the politicians listened, and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. The lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to "glitches" or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea to poverty?

February 1, 2021. Nijole Naujokas is an Australian activist and writer who is passionate about social justice for the vulnerable. She is the current Secretary of the Australian Unemployed Workers' Union and is completing her Bachelor of Honours in Creative Writing at The University of Adelaide.