The Coronavirus: lessons learned to date report catalogues the government failures that have led to one of the highest levels of infection and death in the world. The all-party report was published by the House of Commons Health and Social Care and Science and Technology Committees on 12 October.
Many of the errors are attributed to government mismanagement or inactivity at key points. But it also opens with the criticism that:
“The UK’s pandemic planning was too narrowly and inflexibly based on a flu model which failed to learn the lessons from SARS, MERS and Ebola. The result was that whilst our pandemic planning had been globally acclaimed, it performed less well than other countries when it was needed most.”
In the early stages of the pandemic, in April 2020, Dr Stella Perrott raised concerns about whose science and whose data was driving the government’s response to the pandemic. She argued that the narrow range of scientific research and data that the government counted as being worthy of consideration and influence was part of the problem of its mismanagement.
The piece, reproduced here, is a timely reminder of the dangers of the narrow or group thinking then being pursued by Dominic Cummings, the PM’s adviser.
Scientific authority, whose data counts?
Article originally published 9 April 2020
The scream of the motos struggling up the hill taking their owners to work suddenly stopped and even the dogs were instantly silenced, with no passers-by to bark at as the Spanish lockdown was introduced. It was immediate, and immediately enforced with loud hailers touring the streets telling people to stay inside. Police patrolled the roads into the towns and cities. Why had Spain been in total lockdown for a week before the United Kingdom even tentatively closed the schools and pubs but let non-essential employment continue? Both countries had access to the same data and science on which to formulate a response yet came to very different conclusions about how to manage the crisis.
Mostly, we think of science as factual and without bias. It can be tested and revised, updated and complemented but remains obdurately neutral, or so we think. In ‘following the science’, the government was urging us to place trust in their decisions and advice as though there was just one science from which logic inescapably flowed. Many welcomed a more scientific and less politically ideological approach to government policy. But the commissioning of data and research, the methodology chosen and the analysis and interpretation are not neutral processes and can strongly reflect the biases of those commissioning or leading the work. Science is as contested as any other area of human endeavour.
The initial UK government approach, so-called ‘herd immunity’, anticipated a 60 to 80 per cent infection rate with subsequent immunity amongst those contracting the virus. This approach differed from the rest of the world and contrasted with the received wisdom of the World Health Organization (WHO), which sought sustained, energetic suppression, testing and follow-up. The ‘precautionary principle’ of health policy makers and the ‘do no harm’ principle of doctors seemed no longer to apply. The UK had become an untested ‘start up’ for a new way of tackling disease, with an unproven approach being tried on an unsuspecting population. Inevitably then, the science on which government decisions were made became the focus of press and academic attention.
The biases and limitations of the SAGE advice
The Scientific Advisory Group for Emergencies (SAGE): Coronavirus (COVID-19) Response scientific papers were eventually published on 19 March 2020. From the publication of these papers, we learned that the key sciences valued by the government were medical, behavioural and data sciences, with a lead role for epidemiology. Almost all the papers published on the website are recent, theoretical, UK-led and data driven. In other words, they prioritise data science over learned ‘on the ground’ experience such as that accrued in South East Asian countries in the SARS outbreak of 2003. The papers also reveal a highly data-driven approach to anticipating citizen behaviour in the event of strong controls over personal freedoms being introduced.
The selection of the research and data that ‘counted’ for government policy-making suggests a fairly close match with the type of data and research sought and adopted, very successfully, by Dominic Cummings, No 10’s chief policy adviser, in both the Leave campaign and the Conservative general election victory. On the back of these successes, Cummings has made no secret of his desire for government policy to be (big) data led, predictive and collated in real time. To achieve this end, he has sought to fill government posts with “data scientists and software developers; economists; policy experts; project managers; communication experts; junior researchers”.
Clearly then, while ‘the science’ influenced government policy, government policy was also driving the science, at least in so far as what ‘counts’ as authoritative. The SAGE research papers indicate reliance on a particularly British strand of new, emerging and real-time data and research, almost to the exclusion of learning from those countries that experienced SARS in 2003 or the accumulated wisdom of the WHO. It is this same belief in British exceptionalism and superiority, ‘we know best’, that has characterised the Brexit negotiations and the government’s response to the environmental crisis. It is not that the approach to the science is wrong. Rather, it is that it is incomplete and the inherent methodological biases are not corrected by alternative or supplementary research and data.
Bias in UK pandemic priorities
Not only did the choice of data and research reflect an inappropriate national bias, so too did it reflect an inappropriate gender bias. Fears of rebellious, anti-social and disruptive behaviour were researched and enumerated, with recommendations for their mitigation, leading to the government’s initial reluctance to introduce strong measures of social control or ‘lockdown’ as had occurred in other countries. These same fears of right-wing, male-dominated civil disobedience were also apparent (or perhaps utilised) in the government’s commitment to achieving Brexit, almost at any cost, with riots anticipated or threatened if we failed to leave.
In prioritising and responding to the risks posed by (primarily) men as an area in which research and data needed to be gathered and measured to inform policy, the risks of (primarily) women being unable to find food for their families or care for their children have been neglected, resulting in the panic buying and despair of some shoppers as they return again and again to empty shelves between their nursing or care shifts.
The heavy focus on the demands of hospitalisation, with its medical model of intervention, as a necessary area for data collection, analysis and action has led to insufficient information about care before, after, or as an alternative to hospitalisation. It is likely that most deaths will occur at home or in care homes, while the most intensive and prolonged nursing will be needed outside the hospital setting. The primacy of data modelling on the impact on hospitals neglects the continuing need for social care and nursing input, care that is primarily dependent on a largely female paid and unpaid workforce who may not be in a position to provide it. The ‘success’ of the untested strategy of ‘herd immunity’ was predicated on people dying prematurely, reasonably quickly and in hospital. It did not take account of the likelihood of a substantial number of living but debilitated older people dependent on continuing health and social care.
The data scientists and epidemiologists have provided a wealth of important and necessary information on disease trajectory and how mitigation might affect it. In China, the data collected by the state from people’s mobile phones has enabled the tracking of contacts, with alert messages automatically sent to them when a new case was identified. Whether or not we agree with that approach, we can recognise the importance and power of new disciplines in shaping prevention strategies. The discussion here should not undermine that valuable contribution but, rather, highlight the limits of each particular discipline or type of knowledge.
The inherent biases and gaps in the knowledge base may go some way to explaining why the government’s approach has had only limited success so far, both in stemming the spread of Covid-19 and in managing the deleterious social consequences that are also now emerging. Other voices and approaches beyond the masculine, British-centric, big-data approach to managing the coronavirus need to be sought. Partial or biased data and science lead to partial or biased policies and approaches.