The murder of MP David Amess has reminded us all of the threats and abuse that those in public life face, particularly online. This has led to renewed calls to ban anonymous social media accounts. In a recent piece, Harry Dyer argues that this is not the answer, and that anonymity can in fact be a form of protection for marginalised communities.
Ending online anonymity and the risks for the marginalised
There was much I agreed with in Dyer’s article. For instance, there is no clear causal link between online abuse and the tragic death of Amess; it cannot be said that banning anonymous accounts would end all abuse; and removing anonymity could have serious consequences for marginalised groups. Indeed, the ability to be anonymous online has clearly been a lifeline for many.
It is also true that many of those who disseminate the most egregious disinformation and throw the most vitriolic abuse online do so in their own names. There is a problem with online discourse that goes beyond the debate over anonymity.
Nevertheless, I would suggest that we are being presented with a false choice. Just because some abusers act in their own name does not mean we should not try to act against those who hide behind a fake name or avatar. And while we should obviously strive to preserve the freedom for users to express themselves without fear of identification and retaliation, that does not mean that everyone should be left exposed to the toxic, racist and misogynistic accounts that are responsible for so much online harm.
Clean Up The Internet’s campaign to improve online transparency
At Clean Up The Internet we campaign to improve the level of discourse online. We have commissioned research, available on our site, showing that anonymous accounts are disproportionately responsible for the circulation and amplification of disinformation, for example the Covid/5G conspiracy theory.
We have also addressed the question of abuse, in particular racist abuse, and here it all gets rather murky. We know from speaking to anti-racist organisations, and to others who track down racist trolls, that a large number of the accounts responsible are anonymous, or at least not identifiable. Yet, in response to the abuse thrown at the England footballers following the Euros final, Twitter publicly claimed that “99% of the accounts suspended were not anonymous”. That claim was uncritically repeated by a number of journalists and commentators who frankly should have known better.
As an ex-lawyer, I remember that whenever a client wanted to push a rather ambitious argument, we would ask ourselves whether it would ‘pass the smile test’. A claim that 99 percent of the Twitter accounts responsible for racist abuse were not anonymous must surely fail that test with anyone who has spent more than a few minutes on the platform. The only way to make sense of it is that Twitter was using its own idiosyncratic definition of ‘anonymity’, one that would not be recognised by the average citizen.
Were Twitter right in their estimates?
As you will see from the blog page on our site, I wrote to Twitter in August asking them to explain how they arrived at 99 percent. At the request of Kick It Out, I copied them into the letter. After all, I reasoned, Twitter might well ignore us, but they would at least have the courtesy to reply to the leading organisation working to eradicate racism from football.
When no reply was forthcoming, a British MP asked to be copied into the follow-up, and after a further chaser from her office we finally received a reply last week. It confirmed that Twitter defines an account as ‘not anonymous’ if the user has provided a phone number or email address, even though the phone could be a cheap burner and the email address could be [email protected].
The Twitter response, and the delay in obtaining it, did not come as a great surprise. A few weeks ago, I watched a Channel 4 documentary presented by the ex-footballer Jermaine Jenas. One aspect that stood out was that when he and the police tried to pursue anonymous accounts responsible for extreme racist abuse directed at him, Twitter would not cooperate to provide details of the perpetrators, or even act to take down many of the offending posts.
The solution going forward: online verification
So what’s the solution? We agree that we don’t want to ban anonymity. But what if we turned this debate on its head? We could instead ask the platforms to allow all of us who want to be verified to do so, and to display a mark against our account (perhaps a tick) showing all other users that we are who we say we are. The platforms could then give us the ability to decline to interact with, or receive replies from, accounts that are not verified. This three-point solution would not be difficult to implement.
At a stroke that could have a major impact. Footballers, politicians, celebrities and the rest of us who want to communicate with a wide audience could do so, but could choose only to see the replies that come from verified accounts. No longer would we first have to endure the abuse and then individually mute or block accounts, only to see them swiftly succeeded by another account controlled by the same person.
Third party tools for verification
Some will argue that verification has its own challenges, and indeed it does. But while we know from polling that the vast majority of users would be happy to verify, we must stress that under our proposal nobody would be compelled to verify if they did not want to. Moreover, there are good, trusted third-party tools out there that can provide verification without us having to hand over more data to the platforms, some of which cannot be trusted not to exploit it to target us with ads, or worse. It’s not our job at Clean Up The Internet to promote any particular providers, but those interested could look at OneID or Yoti.
If they wanted to, the platforms could introduce the three-point solution we advocate within a very short period, and could also agree to work with reputable third-party verifiers. Why would they not? Well, if verification became the norm, that might reveal how far user numbers are inflated. A reduction in online abuse and anger might reduce ‘engagement’. And independent verification could eat into their treasure troves of data.
The necessity of tangible legislation for curbing online abuse
And that is probably why these changes, and other reasonable protections demanded by citizens, won’t be brought in voluntarily. Legislation will be needed. We are campaigning with many others to get appropriate language inserted into the Online Safety Bill. To assist that process, Siobhan Baillie MP is sponsoring a private member’s bill on 24 November that will help to establish parliamentary support for our proposals. The Online Safety Bill is likely to be the best chance we will have for some years to effect real improvements in the online space, and (to borrow a phrase much loved by one of the worst offenders) ‘drain the swamp’.
Of course, that will still leave those who disseminate hatred in their own names. But even they are amplified and given apparent support by armies of fake accounts, bots and others whom they can mobilise to pile on to those they attack. As Dyer pointed out, users “take their cues” from other posts, many of which come from accounts that are not identifiable but which they believe must reflect wider public opinion. Our proposal would do much to reduce the reach and impact of such divisive messages. We might even suggest that the platforms show only a ‘follower count’ that corresponds to genuine verified accounts, which could be very revealing in relation to the vastly inflated follower counts of a number of high-profile shock jocks and even certain politicians.
There is much to do, and we must try to identify and mitigate any possible unintended consequences. But we shouldn’t be put off starting just because ‘it’s complicated’. Essentially, if you have a right to speak, don’t I at least have a right not to listen to you?