This piece was written in my capacity as the Free Speech Union’s Communications Officer. The Free Speech Union exists to protect those who’ve been cancelled, harassed, sacked or penalised for exercising their legal right to free speech, whether in the workplace or the public square. Please take a look at the great work the organisation does – our Twitter account is here, and if you’d like to join us as a member you can do so here.
The European Union’s (EU) Digital Services Act (DSA) came into force on Friday 25th August, establishing a regulatory framework that critics have likened to an “incoherent, multilevel censorship regime” (LA Times), one that will have a “chilling effect on free speech” (Spiked) and “momentous and disastrous implications for freedom of speech worldwide” (Brownstone), and that will ultimately cause “the death of free speech online” (Spiked).
The DSA is designed to function in combination with the EU’s so-called ‘Strengthened Code of Practice on Disinformation’ (the Code), which now requires online platforms with more than 45 million monthly active users – i.e., companies like Facebook, Twitter and Instagram – to swiftly censor ‘mis-’ and ‘disinformation’, and provide regular updates – or ‘transparency reports’ – on all the work they are doing “to fight disinformation”, as the EU’s new ‘transparency centre’ website puts it.
The DSA also places an aggressive enforcement regime at the disposal of the European Commission (EC): if Big Tech companies fail to abide by the Code, they can be fined up to 6% of their annual global revenue, investigated by the Commission and potentially even prevented from operating in the EU altogether.
Unfortunately, the fact that the UK is no longer an EU member state won’t do much to protect social media users in this country from the new law. The so-called ‘Brussels Effect’ – whereby firms wishing to operate in the world’s second-largest market quickly come to apply the EU’s strict regulatory standards outside as well as inside the EU – is well known. (Witness the evolution of GDPR from newly minted EU regulation in 2018 to global standard today.) What’s particularly striking about the first raft of transparency reports submitted to the EC earlier this year is their focus on fighting disinformation globally: Big Tech’s efforts to meet the EC’s censorship expectations affect not only the accounts of users based in the EU, but those of users all around the world.
So who is to say whether something is dis- or misinformation? In the case of social media platforms operating within the EU, the EC’s unelected bureaucrats are the arbiters, since it is the Commission that will decide whether platforms like Facebook are doing enough to combat it. (It is the EU’s executive body, the EC, that the DSA invests with the exclusive power to assess compliance with the Code and apply penalties if a platform is found wanting.)
And what kind of speech is the DSA expected to police? The Code defines disinformation as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm”. That sounds innocent and apolitical enough. Yet the European Digital Media Observatory (EDMO), which was launched by the EC in June 2020 and aims to “identify disinformation, uproot its sources or dilute its impact”, appears to adopt a much broader, deeply politicised understanding of the term “misleading content”.
Consider, for instance, some of the “disinformation trends” listed in EDMO’s recent 2023 briefing on disinformation in Ireland. They include “nativist narratives” that “oppose migration” (bye bye, Nigel Farage) and “gender and sexuality narratives” that touch on trans issues as “part of a wider ‘anti-woke’ narrative that mocks social justice campaigns and efforts to promote diversity and inclusion” (farewell, Titania McGrath). In the section on “science and environment”, posts in which “Greta Thunberg is a frequent target for abusive language and humour relating to climate change” are cited as disinformation, as are those in which “Met Éireann [the state meteorological service] weather warnings are viewed with suspicion” (au revoir to the Global Warming Policy Foundation).
As Laurie Wastell points out for the European Conservative, what is common to these supposedly ‘harmful’ narratives is not that they contain ‘disinformation’ in the sense outlined in the Code (i.e., “false information intended to mislead”). Rather, they represent opposition by the public to unpopular policies favoured by elites – in this case, mass migration, transgender ideology and Net Zero eco-austerity.
That’s concerning, given that EDMO sits on the Code’s ‘Permanent Taskforce’ (stated aim: keeping the Code “future-proof and fit-for-purpose”) and that nearly every company that submitted a transparency report to the EC earlier this year cited EDMO as a partner organisation.
According to EC President Ursula von der Leyen, the EU considers it vital that companies censor disinformation of the kind identified by EDMO in order to “ensure that the online environment remains a safe space”.
Safe for whom, one wonders – politicians or citizens?