CAN PLATFORMS CANCEL POLITICIANS?

An international study led by the HBI provides a first comparative overview of how societies and governments in 15 countries react to private power over political actors.

by Martin Fertmann, Matthias C. Kettemann and Mara Barthelmes
 

Private power over public speech is often contested, but the conflicts are magnified when this power is asserted over parties, political candidates and office holders who are focal points of public debates. Although most platform terms of use and their enforcement systems are global, opinions on whether speech by prominent politicians and office holders deserves preferential treatment differ according to national political and legal contexts.

In a recently published study, we examined the interplay between these national rules and global private ordering systems by synthesizing answers to nine questions submitted by more than 30 researchers from 15 countries within the GDHR Network. Our study provides a first empirical and comparative overview of how societies and governments conceive of and react to private power over political actors.

From January 6, 2021, onwards, Internet platforms such as Twitter, Facebook, Instagram, YouTube, Twitch and Snapchat suspended the accounts and channels of Donald Trump and his supporters. It was not only the platforms, as the top layer of the Internet, that reacted: financial service providers, app stores and even email service providers and dating apps took similar action. Trump's suspension, the first and most widely discussed action by platform companies against an (albeit outgoing) head of state from the Global North, offered an insight into the state of platform governance debates in the participating countries.

This “deplatforming” made one point very clear: platforms can intervene very effectively (and remove content and users) if they want to. Even during the U.S. election campaign, they limited algorithmic recommendations, banned political ads, and demonetized and deamplified problematic content. The platforms (re)discovered that fighting hate speech and disinformation, especially in the context of the COVID-19 pandemic, also appeals to politicians and customers.

Private vs. state regulation

The study revealed that in the ongoing debate about who should set and enforce the rules for speech on online platforms, terms of service-based measures against political and state actors – as both key subjects and objects of political opinion formation – have become a focal point. State regulation of platforms creating an obligation to spread information is regarded, with minor national differences, as dangerous for the free and unhindered discursive opinion-forming process.

Similarly, the exercise of content moderation policies by the major platform companies is viewed critically by politicians in all countries examined. Most politicians in European countries emphasise fundamental rights and demand that such decisions be made by states, not by private companies. This position, however, stands in unresolved conflict with the constitutional realities of the participating countries: office holders mostly cannot invoke fundamental rights when acting in their official capacity, and laws imposing “must-carry” requirements for official information do not exist for social media. Such laws would probably be constitutional only in narrowly defined, special circumstances such as disaster prevention.

The growing power of platforms over societal discourse, whether exercised by restricting democratically elected office holders or by setting the terms under which political campaigns can take place, challenges our understanding of who should set standards for political speech. Conversely, state interference in how official information is or is not spread by private actors is not just met with scepticism but is also subject to significant constitutional constraints in many countries.

Creating new hybrid actors

The referral of the Trump decision to Facebook’s Oversight Board has launched a broader debate on institutional structures to improve content governance by creating hybrid actors such as independent “Social Media Councils”. Such institutions would not be constructed entirely from the perspective of either companies or states but would incorporate elements of external societal input with a degree of independence from both states and companies.

Media commentators in participating countries interpreted the deplatforming of Trump as a signal that far-right parties and politicians around the world may face increasing scrutiny. Meanwhile, conservative politicians and governments in several participating countries instrumentalized the actions against Trump as alleged evidence of the platforms’ bias against conservative opinions. Although in most cases there are no specific legal requirements for content moderation, contributions from several countries point to a general, often constitutional, privileging of the expression of politicians and office holders. This could support, or even compel, platforms’ decisions to allow content from political actors even when it violates their terms of use.

The discussion about the influence of social media on public discourse is still in its infancy. While the deplatforming of Donald Trump may not have been the origin of this debate, it has certainly added fuel to it.