How has the pandemic affected ‘Internet lawmaking’?

The pandemic has undoubtedly had a significant effect on virtual law, on the legal relations of digitality – and this becomes particularly manifest in digital semi-public, semi-private spaces. Even before Corona, a rapidly growing share of communication took place online. Platforms that are formally private communication spaces have gained systemic importance for public discourse. They have become central communication platforms of a free and democratic society. The Internet has strongly shaped our communicative practice. As the European Court of Human Rights stated as early as 2015, the Internet is “one of the most important means by which individuals exercise their right to freedom to receive and impart information and ideas, as it provides […] essential tools for participating in activities and discussions on political issues and questions of general interest” (Cengiz and Others v. Turkey, 2015).

Can you name some of those effects?

Community building increasingly takes place online today, and Corona has accelerated this. Online communication spaces – communicative settings in which discourse relevant to democratic decision-making unfolds, but in which relationships are also created and cultivated (the private is political!) – have enriched and partly replaced public spaces. This is a challenge for states, which still bear the primary responsibility for protecting human rights and fundamental freedoms, both online and offline.

New forms of mechanised power have emerged, and private actors have gained power. Their domiciliary rights and their general terms and conditions are the primary yardstick for a large part of online communication. When platforms delete, they do so largely on the basis of these domiciliary rights: studies suggest that about 95% of all deletions are carried out not for reasons of “(state) law” but on the basis of private domiciliary rights.

There are, however, boundaries that were drawn before Corona but which are particularly important in and after Corona. If, by deleting their accounts or comments, users are deprived of “an essential opportunity to disseminate their political messages and to actively participate in the discourse with users of the social network” and their visibility is “considerably impaired”, especially in the context of elections, then platforms must take these users back online (as the Bundesverfassungsgericht, the German Federal Constitutional Court, held in its decision in the “III. Weg” (“Third Way”) case). Platforms must also treat users equally and may not delete them arbitrarily. Fundamental rights apply in part horizontally between platforms and users.

Platforms therefore play an important role in the management and control of information during the pandemic. Chinese platforms cooperated closely with government messaging (and message control), but US platforms such as Twitter and Facebook, which in the past had taken a “hands-off” approach to certain types of disinformation, also made a U-turn. Facebook, for example, deleted invitations to anti-lockdown demonstrations, while Twitter (like other social media platforms) relied heavily on automated filtering.

In and after Corona, there is no lack of applicable rules: from international law and regional integration law to state law, from community standards to general terms and conditions. But many users and some countries disregard the standards that are a prerequisite for meaningful communication. In addition to large-scale information operations using deliberate misinformation and artificial accounts (social bots), hate speech – from discrimination to Holocaust denial – also has a corrosive effect on lawful and ethically stable communication behaviour in online spaces.

Internet law expert PD Dr. Matthias C. Kettemann, LL.M. (Harvard), is head of the research programme “Regulatory Structures and the Emergence of Rules in Online Spaces” at the Leibniz Institute for Media Research | Hans-Bredow-Institut.