Very bad laws are still laws. The Tories are about to pass one that will make the damage caused by Brexit seem like a fart in a hurricane: the online safety bill. It will ruin your experience of the internet. Guaranteed.
We need to ease into the subject, because it’s quite complicated. But that complexity hides enormous danger. The online safety bill has the potential to damage the UK economy far more severely than Brexit has. It would cut the UK off from the normal internet the rest of the world experiences.
Concerns over the online safety bill
To give you an idea, here are some of the companies that have said they may no longer service the UK market or cut back on what they offer UK users if the online safety bill comes in:
- WhatsApp: ‘UK ministers told WhatsApp will leave country if Online Safety Bill isn’t modified’
- Wikipedia: ‘Wikipedia could shut down in UK after online safety law passes, Government told’
- Signal: ‘Signal says it’ll shut down in UK if Online Safety Bill approved’
- Apple: ‘Apple joins opposition to encrypted message app scanning’
Before we go any further, it’s vital to understand one thing: the rest of the world will keep going, even after the online safety bill comes in. Huge global firms (the ones we all rely on for communications, social media and discussion platforms) won’t change the way they operate around the world just to fall in line with draconian rules in one tiny market.
Imagine that the UK passed a law that all new cars sold here must come with five wheels and a solid gold steering wheel. How many manufacturers would produce models that aligned with this requirement? Between zero and none.
Why are these changes happening now?
This is happening now because the MPs involved in ramming the law through really don’t understand the technology – or the markets – that would inevitably be impacted. The world literally doesn’t work the way they think it does, if they’ve thought about it at all.
One of their key luminaries at an earlier stage of the process was Nadine Dorries.
There are a couple of reasons this is only now coming to our attention:
- It’s so stupid-beyond-all-comprehension that most of the people who do understand it assumed it wouldn’t happen. (A dangerous assumption, as past events have proven.) That said, many have poured a huge amount of effort over several years into trying to fight it.
- The bill is getting to the end stage of the process that will make it law, after many years of wrangling. Despite its bone-jarring self-destructiveness, it’s still happening. It’s like the Terminator: it just won’t stop. Ever. Until the UK internet is dead.
Given how serious this is, you might have expected more fuss. But the complexities really are hard to explain. And the bill is so ridiculously harmful that it’s almost impossible to persuade people it’s for real, or to believe the government would actually go ahead with it. And yet now, at report stage, it’s very nearly law.
The main issues
- Once the bill passes, the online safety law (OSL) will apply to all companies with users in the UK (the companies themselves don’t have to be based here). Therefore it will be easier for global firms to just give up on the UK than it will be for them to comply.
- The OSL will break the safety of the internet by requiring firms to implement back doors in their encryption so that the content of messages and other online activity can be monitored for harmful material. At the same time, those same firms are meant to preserve strong encryption to keep users safe. It would be like mandating that every house in England be sold with a front door lock that opens with a skeleton key held by the government (and its lackeys). Think TSA locks and luggage. It’s instantly obvious that as soon as that skeleton key inevitably falls into the wrong hands, nothing is safe anymore.
- The government will have unprecedented censorship powers, since it will be able to define what is ‘harmful’ (including designating ‘harmful but not illegal’ content as unacceptable). This will happen at secretary of state level, without oversight or the input of parliament. The government will also be able to order Ofcom to interpret the law in any way it desires. So platforms will be required to moderate whatever the government deems ‘legal but harmful’, suppressing free speech and dissent whenever ministers like.
- The large tech firms will become an army of government enforcers. They’re the ones who will have to comply with whatever Big Brother says is unacceptable speech that week, and suppress it on their platforms. Huge fines and other penalties are built into the bill to ensure their cooperation. Again, it will be easier for them to throw their hands up in disgust and leave the UK.
- In wild ‘think of the children’ panic mode, the law will require platforms to protect children from accessing age-inappropriate content or services by implementing age verification. That’s right. The same stuff that guards the doorway to some porn sites today may be coming to your social media platform tomorrow. Except that it won’t, because the social media firm will just dump UK users.
- The bill applies to search engines too, forcing them to take measures to prevent users from accessing harmful or illegal content through their services. Again, this is an absolutely wild imposition when you consider that ‘harmful’ is whatever the most prissy, swoon-prone minister defines it as of that day.
- According to the government’s impact assessment, over 25,000 companies will be affected by the bill, including retail websites, blogging platforms, forums, dating sites, online gaming, social media companies, large search engines and streaming services. (That sounds like a rather sizeable chunk of the modern internet, don’t you think?)
There’s more (so much more) but let’s leave the issues there, because life’s literally too short and the damaging nature of the law should already be crystal clear to you. The main text of the bill plus supporting documentation runs to 52,000 words.
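To make the skeleton-key problem in the encryption point above concrete, here’s a deliberately toy sketch. This is illustrative pseudo-crypto, not any real scheme and not anything the bill literally specifies: the point is simply that once every per-user key is derivable from one escrowed master secret, whoever holds that secret can read everything.

```python
# Toy illustration (NOT real cryptography) of why a mandated "skeleton key"
# defeats encryption: every per-user key derives from one escrowed master
# secret, so anyone holding that secret can decrypt every user's traffic.
import hashlib

MASTER_ESCROW_KEY = b"held-by-the-government"  # hypothetical escrowed secret

def user_key(user_id: str) -> bytes:
    # Per-user key derived from the escrowed master secret.
    return hashlib.sha256(MASTER_ESCROW_KEY + user_id.encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream "cipher"; encrypting and decrypting are the same op.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(user_key("alice"), b"private message")

# Anyone with the escrowed master key can rebuild Alice's key and decrypt:
assert xor_cipher(user_key("alice"), ciphertext) == b"private message"
```

The leak doesn’t even have to be the master secret itself: any system that can derive those keys on demand is an equally attractive target.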
Now let’s talk money and practicalities
This is going to cost firms billions of pounds to implement. Content moderation alone could run to £2.5bn. And that’s before significant additional unknown costs that the government admits exist but can’t actually quantify.
“The final [impact assessment] also estimated indicative costs from potential additional content moderation required to comply with the duties of between £1,319.1 million and £2,486.2 million. Given the new duties apply to all content which services remove or restrict access to instead of a defined list of priority harmful content to adults, there are likely to be additional costs to businesses from enforcing these terms of service.”
That’s just one tiny part of the additional burden the online safety bill will impose on businesses.
But of course, there are costs at every single stage, from understanding the impact of the bill (much greater than GDPR, and you may be familiar with how much of a hassle that has proven) to working out how to implement its requirements, to bringing in or developing the technologies to meet some of the bill’s harsher demands.
“A common theme of the government’s engagement with in-scope platforms is that they are unable at this stage to provide reasonable estimates of costs or even actions likely to be taken to comply.”
They literally don’t know how much it will cost businesses or how firms will have to implement the law, so they plan to estimate that AFTER the bill has already passed through parliament and received royal assent.
The impact assessment runs to 108 pages of barely qualified nonsense (the equivalent of building a house out of mud, offal and unicorn droppings and calling it a home) and it’s absolutely bulging with things that can’t be costed or measured, because nobody can anticipate them. How much will it hurt to jump off that cliff onto the pointy rocks below? The only way to find out is to do it …
Again, and it’s like a broken record at this point: many firms will throw up their arms in disgust and quit the UK instead.
The impact assessment
It’s impossible to do justice to something that is the logic-and-reasoning equivalent of a painting produced by monkeys throwing their faeces around, but let’s pull out just a few of the many painful passages from the impact assessment …
These are the things the impact assessment recognises the online safety bill will impose on businesses. Some are quick and easy. Others are enormously complex, and may entail sophisticated systems and processes that have to be created from scratch:
- Reading and understanding the regulations (familiarisation costs) – this includes both primary legislation and related secondary, and future statutory codes of practice.
- Ensuring users are able to report harm – this relates to the mechanism through which users can report harm and could be as simple as a visible email address (already a statutory requirement) or a system which can triage large volumes of reports.
- Updating terms of service – evidence suggests that this is a business-as-usual activity for in-scope platforms. However, platforms may decide to assess and update their terms of service in response to future codes of practice.
- Conducting risk assessments – this relates to the requirement to carry out an illegal content risk assessment and, if the service is ‘likely to be accessed by children’, a children’s risk assessment. Category 1 platforms will also have to assess the risk of legal but harmful content as part of this.
- Undertaking additional content moderation – the online safety bill does not require additional content moderation; however, it is likely that platforms will increase resources in this area to comply with the duties.
- Employing age assurance technology – in complying with the child safety duties, some higher risk platforms are likely to adopt age assurance (and specifically age verification) technologies.
- Transparency reporting – this relates to producing annual published reports on platform harm and related actions taken by the platform.
- Fraudulent advertising duty (customer due diligence) – as part of complying with the fraudulent advertising duty, it is likely that in-scope platforms will conduct customer due diligence on advertisers.
- User verification and empowerment duties – this relates to the requirement on large social media platforms to offer optional user verification and provide user empowerment tools.
- Assessing impacts on freedom of expression and privacy – this relates to publishing an assessment of impacts on freedom of expression and privacy and keeping this updated.
- Reporting online child sexual abuse (CSA) to designated body – this refers to the cost of reporting identified CSA content to the relevant designated body.
The impact assessment estimates that firms will be ‘familiar’ with 52,000 words of complex requirements after four hours of reading by one staff member. That’s right. No internal or external discussions. No having to bone up on complex ideas. A sustained reading speed of 200wpm throughout.
“For initial familiarisation, based on [relative risk] research, there are approximately 180,000 platforms that could be considered potentially in scope … It is estimated that between 20%-50% of all platforms potentially in-scope would read the regulations … this is approximately 20,000 out-of-scope platforms incurring costs of familiarisation. As with other regulations, it is very difficult to predict with certainty how many firms outside of scope would incur costs of familiarisation – evidence for this within the context of online harms is extremely limited…
“For the initial familiarisation, one regulatory professional at an hourly wage of £20.62 is expected to read the regulations within each business … The explanatory notes are approximately 52,000 words and would therefore take just over four hours based on a reading speed of 200 words per minute. This results in the cost of initial familiarisation of between £3.2 million and £8.0 million.”
As well as being an insultingly low estimate, did you notice that they’re assuming many multiples of the companies ultimately affected by the bill will have to read and understand it before realising it doesn’t actually apply to them?
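For what it’s worth, the arithmetic in that passage is at least internally consistent, and a few lines of code reproduce it. The only figures used are those quoted above; the 36,000–90,000 firm range is the 20%–50% of approximately 180,000 potentially in-scope platforms.

```python
# Reproducing the impact assessment's familiarisation-cost arithmetic,
# using only the figures quoted from the document itself.
WORDS = 52_000        # length of the explanatory notes
WPM = 200             # assumed reading speed, words per minute
WAGE = 20.62          # £/hour for one regulatory professional

hours = WORDS / WPM / 60              # ~4.33: "just over four hours"
cost_per_firm = hours * WAGE          # ~£89.35 per business

for share in (0.20, 0.50):            # 20%-50% of ~180,000 platforms
    firms = int(180_000 * share)
    total = firms * cost_per_firm
    print(f"{firms:,} firms -> £{total / 1e6:.1f}m")
# 36,000 firms -> £3.2m
# 90,000 firms -> £8.0m
```

So the £3.2m–£8.0m range falls straight out of three optimistic inputs: one reader per firm, one uninterrupted pass, no discussion or follow-up.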
It would be a fruitless waste of your time and mine to go through all 108 pages in a similar way, tearing grand-canyon-sized holes in their assumptions. Rest assured that those holes are there. Every estimate is underplayed, and rosier than the colour palette of the Barbie movie.
You’re welcome to read the impact assessment document for yourself if you have any doubts in that regard.
False dichotomies and the law of unintended consequences
So, we have a fatally flawed bill being driven through by ideologues who don’t understand the first thing about it, reliant on assumptions shakier than a house of cards in a magnitude 9 earthquake. What could possibly go right?
In parting, let’s talk about the slippery weasel aspect of this whole travesty: the ‘think of the children’ pearl-clutching.
At every stage of the multi-year process, supporters of the bill (mainly the government, but also small-c conservative organisations) have sought to portray those speaking out against it as supportive of all the harms the bill is meant to protect against. Basically: ‘Oh, so you want child abuse material and trolling and death threats and terrorist material and X and Y and Z to proliferate then, do you? What a horrid, horrid person you are.’
As if those were the only two alternatives.
Death stops cancer growing. But you’d be mighty miffed if you were accused of wanting to spread cancer because you tried to stop people killing themselves.
Over to you now. Inform yourself. Inform others. Spread the word. Back up everything that’s important to you on every social media and comms platform you value and put pressure on your MP.