Background: The development of online harms legislation


A divergent starting point

Historically, there has been divergence in the laws that either govern the liability of online platforms for harmful content or impose specific duties on online platforms to stamp out online harms. Many countries already recognise some form of hosting defence, backed up by a notice and take-down procedure, whereby online platforms can avoid civil and criminal liability for illegal or infringing content on their platform by taking it down promptly after becoming aware of it. However, even this core principle of platform liability has been applied inconsistently around the world.

For instance, in the US, section 230 of the Communications Decency Act extends a defence to online platforms even where they take action in good faith to restrict access to harmful or objectionable content – often referred to as the “Good Samaritan” rule. As such, online platforms operating solely in the US have perhaps had more scope for actively fighting harmful material without the same threat of losing their hosting defence by acquiring knowledge of unlawful content on their platforms. By contrast, the current EU hosting defence, set out in Article 14 of the E-Commerce Directive 2000, is worded in such a way that any engagement with content that is more than technical, automatic and passive could be treated as giving a platform knowledge of the content, making it liable for that content should it prove to be illegal or infringing (although it is worth noting that the proposed EU Digital Services Act looks to tackle this, at least for illegal content, by providing that a platform’s voluntary own-initiative investigations aimed at detecting, identifying and removing illegal content will not, in themselves, deny the platform a hosting defence).

An analysis of the interplay between online harms proposals and platform liability is in the Issues section of this report.

Furthermore, some lawmakers have already taken a lead in providing a framework for platforms to tackle harmful online content. In Germany, the well-publicised “NetzDG” law sets out specific requirements that online platforms must meet in terms of blocking access to certain types of illegal content and reporting on their efforts. Several other countries are considering similar laws. The International Perspective section of this report summarises some of the main existing laws and proposals in certain key territories.

Additionally, the EU’s Audiovisual Media Services Directive, as amended in 2018 (AVMSD), sets out obligations for broadcasters, on-demand programme services and video-sharing platform services to protect minors from video content and video advertising that may impair their physical, mental or moral development. The country-of-origin principle applies to AVMSD, meaning that a service provider only needs to comply with the implementation of AVMSD in its country of origin. As such, EU Member States have taken different approaches to implementing AVMSD; in particular, those that are not home to any major service providers may implement it with a lighter touch than those that are (for example, Ireland is expected to implement the AVMSD amendments through online safety legislation that covers all types of services). In the UK, the Government has the additional problem of how to fill the gap left by the country-of-origin principle post-Brexit.

Given all of this, online platforms operating internationally have had to deal with a divergent, unclear and – to an extent – contradictory starting point in working out what is required of them in order to combat harmful material.


The trigger for change

In recent years, there have been growing calls from the media, legislators and regulators for greater regulation of social media and other user-generated content. These calls have followed several incidents in which harmful content was published in a seemingly unmoderated fashion and resulted in actual physical harm, with the causes traced back to online activity on user-generated content platforms.

In particular, content platforms have been in the spotlight as a result of being used as a means for:

  • inciting terrorist activity;
  • facilitating child sexual exploitation; and
  • manipulating major elections.

Even before the COVID-19 crisis, there was a fear that online platforms could be used as a way of spreading misinformation. But the crisis has emphasised how quickly potentially harmful misinformation can spread, and how opportunists can play on individuals’ fears and vulnerabilities for commercial gain, such as through the selling of unauthorised drugs or medical devices.

The debate has been characterised by the need to balance the privacy and safety rights of individuals against the right to free expression, and by the need to protect those who may be vulnerable to new forms of harm against which the legal system has not yet sufficiently developed.

Already in early 2019, the UK House of Lords Committee on Communications had stated in its report on digital regulation that ‘online platforms have developed new services which were not envisaged when the e-Commerce Directive was introduced’, and that ‘“notice and takedown” is not an adequate model for content regulation. Case law has already developed on situations where the conditional exemption from liability under the e-Commerce Directive should not apply. Nevertheless, the directive may need to be revised or replaced to reflect better its original purpose.’

While online platforms and social media companies have made energetic efforts to prevent harmful content, the variation in approaches and expectations across different states makes it difficult to form a comprehensive picture of what the future regulatory landscape will look like.


The need for balance to protect free speech and innovation

Online platforms have always been aware of the need to protect freedom of speech (a principle that underpinned the creation of many of these platforms in the first place).

Likewise, over-moderating content shared on user content platforms runs the risk of stifling the sharing of useful information and, ultimately, innovation. Many tech platforms pride themselves on being forums through which innovation can be recognised and/or rewarded. Many experts have warned of such risks; for example, Dr Paul Bernal of the University of East Anglia has cautioned that:

‘there could easily be a chilling effect on freedom of speech if it is taken too far (…) a platform may be cautious about hosting, reducing the opportunities for people to find places to host their material, if it is in any way controversial.’

Balancing these issues with the protection of vulnerable users is difficult. Relying too heavily on notice and takedown mechanisms risks platforms failing to be alert and proactive in stamping out content that can harm user trust. But limiting content beyond what is clearly illegal risks impinging on freedom of speech and innovation. Furthermore, in practice there are many grey areas in which the scope for harm from certain types of content is subjective. The balancing exercise has been made particularly hard by the varied expectations of lawmakers in different parts of the world.

Many academic and industry commentators consider that, while prescriptive laws requiring content filtering and moderation could have benefits, such as transparency and judicial and democratic accountability, in practice any systemic content filtering requirements are likely to infringe fundamental rights. The key risk is that legal and justified uses of certain content will be wrongly caught by ‘one size fits all’ filtering requirements imposed by lawmakers with a poor understanding of how content platforms are run and of the subtleties of the technologies involved.

In our view, the traditional tension arising where rules to protect the vulnerable few may result in derogations from the fundamental rights of the many is particularly pronounced in the case of platform regulation.


A move towards focusing on responsibility, not liability

Although the rules emerging in different jurisdictions, both through enacted laws and legislative proposals, are not generally aligned, it is possible to detect general trends across the board. There appears to be, for example, a move towards:

  • imposing a greater responsibility on platforms for tackling online harms through technical and organisational measures, as opposed to focusing on liability; and
  • encouraging the industry to develop its own procedures and standards for speed of take-down and transparency.

There are several initiatives across different jurisdictions to establish compliance standards that platforms will be required to demonstrate they meet. The rest of this report looks at these initiatives and how they are likely to affect the approaches that online platforms will need to take.
