A-level furore the tip of the iceberg: how digital technology and artificial intelligence threaten social welfare, social rights and social justice

The scandal around this year’s A-level results, and in particular the use of an algorithm to determine students’ grades based in part on the historic performance of their school, drew unprecedented attention to the use of digital technology solutions in national social policy.

The consequent public outcry eventually forced Boris Johnson and Gavin Williamson into another embarrassing U-turn. Yet in truth the episode represented merely the ‘tip of the iceberg’ in terms of how governments around the world are mobilising artificial intelligence, often with little public consultation or parliamentary oversight, to improve the ‘efficiency’ and ‘cost effectiveness’ of social service provision – whether in education, health or welfare.
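
To see why the grading approach provoked such anger, it helps to look at the mechanics. The sketch below is a deliberately simplified illustration of distribution-based standardisation – not Ofqual’s actual model, and the names, shares and cohort are invented – showing how allocating grades from a school’s historic results can cap a strong student at a historically weaker school, whatever her own attainment.

```python
# Simplified illustration of distribution-based grade standardisation.
# NOT the actual Ofqual model: a sketch of the general idea that grades
# are allocated from a school's historic results, not individual work.

def standardise(students, historic_distribution):
    """Assign grades by rank against the school's historic distribution.

    students: list of (name, teacher_assessed_grade), ranked best first.
    historic_distribution: dict of grade -> share of past cohorts,
        ordered best grade first (e.g. {"A": 0.1, "B": 0.3, ...}).
    """
    n = len(students)
    # Build the grade ladder implied by the school's past results.
    ladder = []
    for grade, share in historic_distribution.items():
        ladder.extend([grade] * round(share * n))
    # Trim or pad (with the lowest grade) to exactly one rung per student.
    ladder = ladder[:n] + [list(historic_distribution)[-1]] * (n - len(ladder))

    # Each student receives the grade at their rank -- their own
    # teacher-assessed grade plays no role beyond the ranking itself.
    return [(name, ladder[rank]) for rank, (name, _) in enumerate(students)]

# A strong student at a school that historically awarded no A grades:
cohort = [("Asha", "A"), ("Ben", "B"), ("Cal", "C"), ("Dee", "C"), ("Eli", "D")]
history = {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.4}
print(standardise(cohort, history))
# Asha is capped at "B", however well she was assessed individually.
```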

Notwithstanding this summer’s exam furore, this revolution is happening largely below the radar of opposition parties, journalists and the general public. Yet it has far-reaching – and often disastrous – implications for social welfare, social rights and social justice, risking the creation of what the American political scientist Virginia Eubanks has called a “digital poorhouse”.

The UN, amongst others, has begun raising the alarm about this quiet revolution and what it could mean for global efforts to eradicate poverty, make progress towards the Sustainable Development Goals and their pledge to ‘leave no one behind’, arrest rising inequalities between and within countries, and protect human rights (including social rights such as the rights to education, health, housing and food).

In October 2019, in a speech to the Japan Society in New York, UN High Commissioner for Human Rights Michelle Bachelet juxtaposed talk of the “enormous benefits” of digital technology for human rights, social justice and the fight against poverty with warnings that such technology may also be used, either accidentally or deliberately, to undermine or violate human rights. In late 2019, in his final report to the UN General Assembly, Philip Alston, the then UN special rapporteur on extreme poverty, put it more starkly: the world, he said, is “stumbling zombie-like into a digital welfare dystopia”.

Bachelet’s point that the negative impacts of digital technology on human rights can occur unintentionally is an important one. Indeed, most technology-related human rights abuses probably fall into this category. As noted in her speech, these abuses “are not the result of a desire to control or manipulate, but [are rather] by-products of a legitimate drive for efficiency and progress”. For example, algorithms designed to make social security systems more efficient (and therefore support economic and social rights) may end up exacerbating inequalities. There are risks inherent in digital systems and artificial intelligence, in that they “create centres of power, and unregulated centres of power always pose risks – including to human rights”.

We already know what some of these risks look like in practice: recruitment programmes that systematically downgrade women; systems that classify black suspects as more likely to reoffend; or predictive policing programmes that lead to over-policing in poor or minority-populated areas. The people most heavily impacted are likely to be at the margins of society. Only a human rights approach that empowers people as individual holders of legally enforceable rights can adequately address these challenges.

“To respect these rights in our rapidly evolving world,” concluded the high commissioner, “we must ensure that the digital revolution is serving the people, and not the other way around. We must ensure that every machine-driven process or artificial intelligence system complies with cornerstone principles such as transparency, fairness, accountability, oversight and redress.”

Putting technology at the service of equality and social justice

One of the ways in which digital technology is supposedly being mobilised to support human rights is through the digitalisation of social security systems. This example also provides an instructive case study in how such schemes, though conceived to improve efficiency and cost-effectiveness, can end up violating rights and diminishing human dignity.

At the heart of this case study lies a simple set of questions:

  • Can machine learning replace the experience, intuition and judgment of human beings at the point of delivery?
  • Can artificial intelligence effectively and compassionately judge which families need what kind of help most urgently?
  • Can, in short, algorithms be relied upon to respect, promote and protect human rights without discrimination?

To help answer these questions, in September 2018 the Guardian surveyed a range of local councils (as social service providers) in the UK that were each pioneering new “predictive analytics” systems to identify families and children in need of interventions to prevent child abuse. As well as raising data privacy concerns, the investigation heard that the new systems “inevitably incorporate the biases of their designers, and risk perpetuating stereotyping and discrimination while effectively operating without any public scrutiny”.

These concerns were echoed a year later in an article by Ed Pilkington entitled “Digital dystopia: how algorithms punish the poor”. The article focused on a quiet “sea-change” around the world in how governments treat the poor, powered by artificial intelligence, predictive algorithms, risk modelling and biometrics that only mathematicians and computer scientists fully understand. And yet, he noted, “if you are one of the millions of vulnerable people at the receiving end of this radical reshaping, you know it is real and that its consequences can be serious – even deadly”. Access to unemployment benefits, child support, housing and food subsidies, and much more, is being digitised and automated, replacing the judgment of human caseworkers with “the cold, bloodless decision-making of machines”.


In Illinois (US), for example, algorithms have been used to recalculate welfare payments. Those who have received too much (in some cases, across periods of more than 30 years) have been automatically instructed to pay it back. Similar cases are reported in Australia, where vulnerable and marginalised individuals have been ordered to pay back social security benefits because of a “flawed algorithm”. In Newcastle in the UK, claimants have spoken of a climate of “fear” and “panic”, as social security benefits are changed by “a new generation of welfare robots” without warning, without explanation and without remedy.
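
The Australian case, widely reported as the ‘robodebt’ scheme, turned on a simple but flawed calculation: annual income held by the tax office was averaged evenly across the year’s 26 fortnights and compared with the fortnightly income claimants had reported, so that anyone with irregular earnings could be issued an automated debt they never owed. A minimal sketch of that averaging logic follows – the function, taper rate, free area and figures are illustrative assumptions, not taken from the actual system.

```python
# Illustrative sketch of income averaging of the kind at the heart of
# Australia's "robodebt" scheme -- all parameters invented for the example.

FORTNIGHTS_PER_YEAR = 26

def averaged_debt(annual_income, fortnightly_reports, taper_rate=0.5, free_area=300):
    """Flawed check: spread annual income evenly over 26 fortnights and
    treat any gap against what the claimant reported as an overpayment."""
    assumed = annual_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported in fortnightly_reports:
        if assumed > reported:
            # Benefit is assumed to have been over-paid in proportion to
            # the "missing" income above the free area, at the taper rate.
            shortfall = max(assumed - free_area, 0) - max(reported - free_area, 0)
            debt += taper_rate * max(shortfall, 0)
    return round(debt, 2)

# A claimant who earned $13,000, all of it in 10 fortnights of work, and
# truthfully reported zero income while on benefits the rest of the year:
reports = [1300] * 10 + [0] * 16
print(averaged_debt(13000, reports))  # averaging manufactures a "debt"
```

The claimant in this toy example reported every dollar accurately, yet the averaging step still generates a demand for repayment – which is why earnings that are lumpy rather than steady were enough to trigger false debts at scale.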

These three examples alone are said to have affected millions of people, with the poorest and most vulnerable paying the highest price. Similarly, in India, technical problems with the country’s ‘Aadhaar’ system, a 12-digit unique identification number linked with people’s biometric data, have resulted, in some cases, in destitution, starvation and suicide.

In each of these cases, digital technology solutions are being widely rolled out across welfare systems with minimal public consultation and minimal parliamentary scrutiny. All too often the real motives are, in the words of Philip Alston, “to slash spending, set up intrusive government surveillance systems, and generate profits for private corporate interests”.

Opening the eyes of world governments

As the UN has recognised, the “digital transformation” of our society, including social service provision, has the potential to:

“Accelerate human progress, promote and protect human rights and fundamental freedoms, bridge digital divides, support, inter alia, the enjoyment of the rights of persons with disabilities, the advancement of gender equality … and ensure that no one is left behind in the achievement of the Sustainable Development Goals”.

Yet, as the high commissioner for human rights noted in her speech in New York, its “unquestionable benefits do not cancel out its unmistakable risks”. Unless the steady creep of algorithms, machine learning and artificial intelligence into the provision of national health, social and education services is subjected to full public and parliamentary scrutiny, there is a clear risk that rather than promote human rights and human dignity, and help tackle inequality and discrimination, digital technology could – wittingly or unwittingly – become an agent of unaccountable central control, social exclusion and punishment.

Thus far, world governments have not taken this threat seriously enough. As the world faces severe economic contraction, rising unemployment and growing poverty as a result of the Covid-19 pandemic, the question of whether we, and most importantly our elected representatives, are capable of doing a better job in the future is one – quite literally – of life and death.
