The social media platform X, formerly known as Twitter, has just received a stiff warning from the EU about its apparent failure to swiftly remove deeply harmful disinformation from its site.
The EU found that X had the highest ratio of disinformation posts of all the large media platforms, and it was especially alarmed by the flood of disinformation on X as Hamas attacked Israel.
Disinformation: Musk tests EU resolve
In an open letter published on X on 10 October, EU Commissioner Thierry Breton warned Elon Musk that X faced heavy fines if it failed to remove illegal content promptly.
“Public media and civil society organisations widely report instances of fake and manipulated images and facts circulating on your platform in the EU, such as repurposed old images of unrelated armed conflicts or military footage that actually originated from video games. This appears to be manifestly false or misleading information.”
Breton gave Musk 24 hours to respond to the EU’s urgent request that he act appropriately, inform the EU what crisis measures he had put in place, and contact and respond to relevant law enforcement authorities and Europol’s requests. He ended with a reminder that the EU could levy heavy penalties should X be found to be in breach of the relatively new EU Digital Services Act.
Musk seemed to see this as an opportunity to test the EU’s resolve. He replied asking that the EU “list the violations you allude to on X, so that the public can see them”. Breton was reported to have told him that he (Musk) was “well aware of your users’ – and authorities’ – reports on fake content and glorification of fake content”.
This public rebuke and warning to X has come as the EU’s distaste for X’s role in spreading disinformation and harmful content amplifying extreme views has grown. X has become an outlier among the big social media platforms in respect of new EU laws brought in to curb harmful content.
Play fair | Be honest | Remove ‘fake news’
The EU seeks to ensure that big social media platforms do not act as a conduit for disinformation. Disinformation is defined as verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally deceive the public, and liable to cause public harm.
As such, it is seen as posing significant threats to democracy, freedom, human rights, and our general way of life. States like Russia and China are believed to be the source of much disinformation and harmful content designed to undermine and divide Europe, and have been found to fund bots and use generative artificial intelligence (AI) to create and disseminate disinformation.
EU concerns are magnified by what fact checking has revealed, along with steps taken against journalists to suppress independent reporting. The EU’s draft European Media Freedom Act has yet to be adopted, and the member governments share the EU Commissioner’s concern over how AI can create and magnify disinformation intended to deceive the public. But some member governments, like Hungary’s, have challenged calls for strong responses to try to protect European democracies.
However, there is broad consensus that disinformation has damaged the public. Officials and politicians repeatedly warn about the distortions of opinion and trashing of expert views, referring to Covid, the war in Ukraine, Brexit, social media actions in relation to recent and upcoming elections in Slovakia and Poland, and extremist online content.
The EU is acutely aware of the dangers posed to fair elections in the member states, and also to the prospective elections to the European parliament early next summer. Trying to impede the manipulation and distortion of election results is but part of the EU’s wider attempt to combat the dissemination of disinformation, harmful content, and manipulated and misappropriated images to citizens. The EU is not alone in seeing AI-generated disinformation as a threat to civil society. The UN, too, sees it as the work of hostile actors, sometimes criminals, seeking to undermine public trust in legitimate authorities and experts. As part of the toolkit to tackle them, both advocate transparency, accountability and openness to scrutiny.
Be open | Share | Report
This approach is broadly shared by all major online platform signatories of the EU code of practice on disinformation, including Google, Meta, Microsoft and TikTok, all of which have provided the first of their six-monthly reports to the EU on the implementation of the code of practice.
Google reported that in the first half of this year, it stopped over €31mn in advertising going to disinformation actors, and took action against over 141,000 questionable political ads. Between January and April 2023, YouTube terminated 411 channels and ten Blogger blogs involved in coordinated influence efforts linked to the Russian state-sponsored Internet Research Agency.
TikTok reported that it removed over 140,000 videos, accounting for over one billion views, for infringing its misinformation policy. It fact-checked 832 videos related to the war in Ukraine, and removed 211 of them as a result. Microsoft noted that it had blocked the registration of over 6.7 million fake LinkedIn accounts.
While X refused to sign up to the code of practice, it cannot evade its legal obligations under the EU’s Digital Services Act, since X operates in Europe.
If X continues to ignore its legal obligations, the EU could fine it up to 6% of its global annual turnover, and exclude it from operating in the EU single market.
Following his letter to Musk, Breton wrote to TikTok CEO Shou Zi Chew to address the circulation of disinformation on that platform related to the rapidly deteriorating situation in Israel:
“Following the terrorist attacks carried out by Hamas against Israel, we have indications that TikTok is being used to disseminate illegal content and disinformation in the EU.
“Let me remind you that the Digital Services Act sets very precise obligations regarding content moderation … given your platform is extensively used by children and teenagers, you have particular obligation to protect them from violent content depicting hostage taking and other graphic videos which are reportedly widely circulating on your platform, without appropriate safeguards.
“ … you need to have in place proportionate and effective mitigation measures to tackle the risks to public security and civic discourse stemming from disinformation. As many users, particularly minors, turn to your platform as a source of news, reliable sources should be adequately differentiated from terrorist propaganda.”
TikTok responded by saying it had launched a command centre to look at the latest conflict, updated its automated detection systems to look for graphic and violent content, and added an unspecified number of moderators who speak Arabic and Hebrew.
19.10.23: The latest update on this story is the news that Musk is now considering withdrawing X from the EU area completely. Watch this space!