The issues: what online platforms need to resolve


This section takes a deeper dive into some of the issues that are likely to arise as online safety laws develop, and what online platforms will need to keep in mind when implementing and refining processes to deal with them.

There is no doubt that there is a general movement, at international level, toward a greater degree of protection against online harms. Broadly, a common theme across the main territories seeking to make online harms an area of further regulatory focus is a requirement that platforms have in place specific measures and policies, which they actively enforce, in order to promptly remove harmful material. Nevertheless, the exact requirements on a country-by-country basis are nuanced. In the absence of harmonisation across groups of countries, such as the EU, platforms face the inevitable issue of working out how best to comply with differing regulatory regimes aimed at preventing online harms. Indeed, the EU Digital Services Act proposal is partly a response to the risk of fragmentation between Member States in their approaches.

Given the international nature of content-sharing platforms, they are likely ultimately to face the risk of parallel regulatory investigations by national regulators, where either specific content or broader content moderation practices are deemed to breach the standards required in a particular territory. There is precedent for this in, for example, the parallel investigations by the US Department of Justice, the European Commission and the Australian Securities and Investments Commission into practices relating to interbank offered rates, spanning different jurisdictions.
Based on the range of initiatives detailed in this report, and the fact that we do not yet see a unified approach internationally (or even across Europe), it is likely that platforms offering content sharing services will require dedicated compliance functions to support the management of such cross-jurisdictional requirements. In particular:

  • the reporting requirements in the different jurisdictions are unlikely to be homogenous; however, efficiency savings could be made by a platform centralising its global compliance function (to a degree), so that materials prepared to fulfil the different reporting obligations can be adapted and re-used in different jurisdictions;
  • technical tools preventing content from being viewed in a particular jurisdiction based on the visiting IP address will need to be leveraged to enable compliance with the regulatory standards applicable in the respective jurisdictions, particularly in relation to any specifically prohibited classes of material, such as “hate speech” (a simplified sketch of this kind of geo-gating follows this list);
  • there will be a need for internationally aligned platform policies on responding to and engaging with bodies regulating online content;
  • when weighing up the duty to proactively combat harmful content versus the scope for inadvertently waiving any sort of “hosting defence”, online platforms will need to be aware that such hosting defences may not be available in all jurisdictions, or may be nuanced in the way that they can be relied upon. The Boundaries of Platform Liability section of this report touches on this further;
  • platforms will need to consider coordination in relation to any restrictions imposed on a periodic or time-limited basis in certain jurisdictions (such as those which may be applicable during electoral campaigning cycles in France); and
  • future technical requirements, such as proposed user tools to flag or notify categories of regulated content, should be anticipated and planned for.
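
To illustrate the geo-blocking point in the list above, the sketch below shows, in very simplified form, how a platform might gate content by the viewer’s inferred jurisdiction. It is a minimal illustration only: the blocked-category table and the geoip_lookup stub are hypothetical placeholders, not any regulator’s or platform’s actual rules, and a real system would rely on a maintained geolocation database and far more granular policies.

```python
# Illustrative sketch only: jurisdiction-aware content gating driven by a
# per-country policy table. The categories and the geoip_lookup stub are
# hypothetical placeholders, not any platform's actual rules.

# Hypothetical mapping of jurisdiction -> content categories that must not
# be served there (e.g. classes of material a local regime prohibits).
BLOCKED_CATEGORIES_BY_COUNTRY = {
    "DE": {"hate_speech"},
    "FR": {"hate_speech", "election_disinformation"},
    "AU": {"abhorrent_violent_material"},
}

def geoip_lookup(ip_address: str) -> str:
    """Stub: resolve an IP address to an ISO country code.

    A real platform would query a maintained geolocation database here;
    this stub returns a fixed value purely for illustration.
    """
    return "DE"

def may_serve(content_categories: set, ip_address: str) -> bool:
    """Return True if none of the content's categories are blocked in the
    viewer's (inferred) jurisdiction."""
    country = geoip_lookup(ip_address)
    blocked = BLOCKED_CATEGORIES_BY_COUNTRY.get(country, set())
    return not (content_categories & blocked)

# Example: content tagged as hate speech would be withheld from a German IP.
print(may_serve({"hate_speech"}, "203.0.113.7"))  # False under this stub
```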

Reporting to regulators

Reporting to regulators on efforts made to stamp out online harms will be increasingly tricky for platforms to navigate. The sheer volume of content passing through the moderation process on many online platforms makes accurate reporting difficult even where a platform reports in a single, uniform manner, and this is made all the more difficult where different regulators in different countries impose differing reporting requirements. Enforceable reporting requirements will be a significant step up from the voluntary commitments currently made under various previous initiatives, such as the EU Code of Conduct on countering illegal hate speech online. Reporting on online harms is a common theme in existing laws and proposals for online harms laws. For instance:

  • in Germany, NetzDG already requires German-language reports to be published twice a year, setting out resolution times for take-down of harmful content and other statistics relating to the report handling procedure (and the proposed amendment would extend this requirement to oblige platforms to give details of machine learning algorithms used as part of the platform’s wider content moderation effort);
  • in the UK, the government’s proposed Online Safety Bill is likely to require the major online platforms to provide annual transparency reports to the regulator, Ofcom, outlining the prevalence of harmful content on their platforms and the measures being taken to address it;
  • the EU Digital Services Act proposes that detailed reports are to be published at least once a year by online intermediaries detailing, amongst other things, the number and type of take-down notices received and the action taken as part of the platform’s content moderation processes; and
  • in India, the proposed amendment to the Information Technology (Intermediaries Guidelines) Rules 2011 contemplates platforms keeping a record of unlawful activities on their platform for a specified period of 180 days.

In some cases, the types of platform on which such obligations fall will vary between countries, depending on the scope of the law at hand. Nevertheless, it seems likely that many platforms will have to contend with numerous country-by-country reporting requirements which call for distinct information at different times. Platforms will have to decide whether complying with each of these separately is operationally feasible, or whether to lobby local online harms regulators with the aim of agreeing a consolidated set of reporting tools which the platform is comfortable can be deployed across numerous countries.
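
As a concrete illustration of the kind of consolidation discussed above, the sketch below shows one way a centralised compliance function might record moderation statistics once and render them in different jurisdiction-specific shapes. All of the field names, periods and report structures are assumptions made for illustration; they do not reproduce the actual NetzDG or DSA reporting templates.

```python
# Illustrative sketch: record moderation statistics once, centrally, and
# derive jurisdiction-specific transparency reports from them. All field
# names, periods and report shapes are hypothetical.
from dataclasses import dataclass

@dataclass
class ModerationStats:
    period: str                  # e.g. "2021-H1"
    notices_received: int        # take-down notices received
    items_removed: int           # items actually removed
    median_removal_hours: float  # median time from notice to removal

def half_yearly_report(stats: ModerationStats) -> dict:
    # e.g. a NetzDG-style half-yearly report focused on handling times.
    return {
        "reporting_period": stats.period,
        "complaints_received": stats.notices_received,
        "removals": stats.items_removed,
        "median_handling_time_hours": stats.median_removal_hours,
    }

def annual_report(stats: ModerationStats) -> dict:
    # e.g. a DSA-style at-least-annual report on notices and action taken.
    return {
        "reporting_period": stats.period,
        "notices": stats.notices_received,
        "actions_taken": stats.items_removed,
    }

stats = ModerationStats("2021-H1", 12_400, 9_100, 22.5)
print(half_yearly_report(stats))
print(annual_report(stats))
```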

 

Pre-empting potential fragmentation

While the French Avia law initiative failed, it may tell us something about the kinds of country-specific requirements that could arise in future legislative proposals. The Avia law (the majority of which was struck down by the Constitutional Council for incompatibility with fundamental rights) included a requirement to designate a natural person located on French territory to act as the contact person responsible for receiving requests from the regulator. In addition, the fact that the French government launched such an initiative in parallel with the EU’s Digital Services Act consultation shows that, in spite of centralised harmonisation efforts in Europe, individual Member States are likely to pursue their own agendas.

In spite of the potential fragmentation of requirements across different jurisdictions, one clear trend is the introduction of voluntary codes of practice or conduct. Progressive compliance with, and engagement in, these non-binding initiatives is likely to be an effective means of pre-empting a new binding regime in a particular jurisdiction (or within the EU). Keeping an eye on such initiatives and proactively seeking to participate in them is likely to be the most effective strategy for adapting existing systems so that they are best placed to meet any future regulatory requirements.

As international online harms developments begin to place a greater emphasis on wider duties of care to prevent harmful content, and perhaps even on more proactive detection and take-down procedures, the role of Artificial Intelligence (“AI”) and machine learning becomes even more important.

As AI algorithms become more sophisticated, platforms are increasingly turning to technology to ensure that different forms of content meet their standards, to promote compliance with global regulations and to avoid harm to their users. AI-based systems are being positioned to supplement human content moderation processes, and this is a trend which seems set to continue.

 

The challenge of the volume of content

Large online platforms often receive millions of unique submissions each week, and increasingly diverse mixtures of content add pressure to a moderation process which needs to quickly recognise and respond to potentially harmful content. Each day, around 576,000 hours of content are uploaded to YouTube, and users watch more than a billion hours of video. This volume of content presents a challenge that even the most financially capable companies in the world cannot meet through human moderation alone. Enormous amounts of money are invested in developing tools to assist human moderators, not least to minimise the amount of harmful content to which individual moderators are themselves exposed – indeed, some online platforms are all too aware of the scope for inadvertently causing psychological harm to moderators who are asked to review the most objectionable material.

 

How AI can assist

The role of AI is likely to differ depending on content type. Nevertheless, it is likely to continue to play a vital role in content moderation (a simplified first-pass filtering sketch follows the lists below). For instance:

for text-based content:

  • machine learning models can be used to review contributions by looking at a vast combination of data points. On a basic level, reviews can be filtered by specific words or phrases, such as profanity or generally unhelpful language;
  • patterns of submissions can also be tracked across accounts and locations on site, to find users or locations which are promoting fake or harmful content;
  • subject to applicable data protection (and similar) laws, IP addresses and geolocation data can be used to track harmful content submission across devices and even websites. For instance, in the context of online reviews, this could be useful to identify whether it is a subject’s family member who is posting positive reviews for their restaurant, and negative reviews for their competitors.

for image and video content:

  • images, although more technically complex to assess using an algorithm or AI process, can be assessed in a broadly similar way to text-based submissions. Platforms can set up categories and filters for what is deemed acceptable and unacceptable;
  • visual recognition tools currently employed by platforms are exceptionally good at making detailed identifications of objects or activities, such as the make or model of weapons, or facial expressions and body positions suggestive of violence; and
  • the audio accompanying a video can be parsed by software tools to recognise gunfire, or non-speech noises indicative of violence and other prohibited content.
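
The sketch below (referenced above) illustrates the most basic of these techniques for text: a first-pass keyword filter that routes suspect submissions for further review. The word list and the matching rule are invented for illustration; production systems rely on trained models and many more signals than a static list.

```python
# Simplified sketch of a first-pass text filter of the kind described above.
# The flagged-term list is invented for illustration; production systems use
# trained models and far richer signals than a static word list.
import re

FLAGGED_TERMS = {"scam", "threat", "fraud"}  # hypothetical list

def first_pass_flag(text: str) -> bool:
    """Return True if the submission should be routed for further review."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & FLAGGED_TERMS)

print(first_pass_flag("This restaurant is a scam, avoid it"))  # True
print(first_pass_flag("Lovely food and friendly staff"))       # False
```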

 

The limitations of AI and the continued role of human intervention

Despite this, it is commonly accepted that current algorithms are far from perfect. False positives and false negatives remain commonplace, and certain harmful content requires an understanding of the surrounding context to determine whether it breaches platform standards. For instance, language used between adults may not commonly be regarded as harmful, but the same words can have a very different effect when used in the context of online bullying of a child. For AI systems to moderate such content, they need an understanding of the context and history of interactions between users, allowing innocuous and harmful content to be distinguished.

Similarly, not all violent content is created equal. A video of the recent protests in the USA, or Belarus, although containing violence, might be something that a platform or society as a whole wishes to publicise. There is a balance to be found between gratuity and documentary evidence.

Dealing with these issues is not outside the scope of AI moderation tools, and it would be simplistic to suggest that an AI tool’s decision making is inherently “binary” and cannot be trained or tailored to respond to contextually complex scenarios. Moderation tools can even be taught to identify the different types of harmful content which a platform might need to deal with, for instance distinguishing fake positive reviews from fake negative reviews, or identifying content which is in substance an advertisement despite being uploaded as a standard post or review.
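
As a simple illustration of distinguishing content types, the sketch below uses rough, invented heuristics to flag a review that may in fact be an undisclosed advertisement. Real moderation tools would use trained classifiers rather than hand-written rules; the signals here are assumptions made purely for illustration.

```python
# Hypothetical sketch of distinguishing content types mentioned above
# (e.g. an undisclosed ad posted as an ordinary review). Signals are invented.
import re

def looks_like_ad(text: str) -> bool:
    """Very rough heuristic: promo codes or shop links suggest an
    undisclosed advertisement rather than an ordinary review."""
    has_promo_code = bool(re.search(r"\b(use code|discount code)\b", text, re.I))
    has_shop_link = "http" in text.lower() and "shop" in text.lower()
    return has_promo_code or has_shop_link

print(looks_like_ad("Great product! Use code SAVE20 at checkout"))  # True
print(looks_like_ad("Arrived late and the box was damaged"))        # False
```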

Contemporary moderation and analysis tools are extraordinarily robust. It has been reported that some technology companies now have tools powerful enough to automatically scan all live streams on all major social media platforms in real time as they are broadcast. This enormous amount of content can be filtered and assessed as it is uploaded, and automatically flagged as potentially harmful. This does not necessarily mean that the role of human moderators will become a thing of the past. Indeed, in Germany, one proposed amendment to the NetzDG law implies a right for a user’s content to be subject to human decision-making, at least as part of an appeal/counter-notice process. And under EU data protection law, there are potential pitfalls in making entirely automated decisions where personal data is involved; any such processing raises issues around the lawful basis for processing and transparency. Nevertheless, having machine-led functions carry out a broad initial analysis, which can then be backed up by human moderation, seems a necessity for larger platforms.
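
The flow described above, in which a broad machine-led first pass is backed up by human moderation and appeals are resolved by a human, might be organised along the following lines. This is a minimal sketch under assumed thresholds and an assumed model-produced harm score; it is not any platform’s actual pipeline.

```python
# Illustrative sketch of a "machine-led first pass, human back-up" flow.
# The thresholds and the harm score are assumptions made for illustration.
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"

def automated_triage(harm_score: float) -> Decision:
    """Route content on a model-produced harm score between 0.0 and 1.0."""
    if harm_score >= 0.95:
        return Decision.REMOVE        # clear-cut cases actioned automatically
    if harm_score >= 0.60:
        return Decision.HUMAN_REVIEW  # borderline cases go to a moderator
    return Decision.ALLOW

def resolve_appeal(original: Decision, reviewer_decision: Decision) -> Decision:
    """Appeals are resolved by a human decision, reflecting the NetzDG-style
    counter-notice point made above."""
    return reviewer_decision

print(automated_triage(0.97))                           # Decision.REMOVE
print(resolve_appeal(Decision.REMOVE, Decision.ALLOW))  # Decision.ALLOW
```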

As the expectations of lawmakers increase, it is likely that platforms will continue to make concerted efforts to introduce and refine new technologies to assist them in eliminating harmful content from their platforms at the earliest opportunity.

The imposition of obligations on platforms to take more responsibility for preventing online harms raises some interesting questions about whether actively monitoring for harmful content, in discharge of online harms responsibilities, could affect the ability of the platform to rely on a hosting defence.

 

The fundamentals of the hosting defence

In Europe, Article 14 of the E-Commerce Directive (2000/31/EC) allows information society services (which would include most online platforms) to avoid liability for illegal activity or information on their site where (a) they do not have actual knowledge of illegal activity or content and, where a claim for damages is concerned, they are not aware of facts or circumstances from which it would have been apparent that the activity or information was unlawful and (b) upon obtaining actual knowledge or awareness of the illegal activity or content, they act expeditiously to remove or to disable access to it. This hosting defence is relied upon heavily by platforms which give users the ability to post user-generated content. More often than not, robust notice and takedown mechanisms are in place to ensure unlawful content can easily be reported by users, investigated by the platform and taken down if deemed to be illegal or against the platform’s own acceptable use policies or community standards.

 

Pre-emptive scanning vs notice and takedown

We should make it clear that most of the proposals that we have referred to in this report do not automatically remove or override a hosting defence. Indeed, there is no reason why increased duties on online platforms to remove and report on harmful content cannot sit alongside their existing notice and takedown procedures, without platforms incurring any further liability. For example, the hosting defence is widely regarded as still being fully available to video sharing platforms which implement measures to meet the requirements of the EU’s Audiovisual Media Services Directive.

Still, historically platforms have had to face up to the risk that implementing proactive mechanisms to better scan for, identify and remove harmful content (rather than relying on notice and takedown procedures alone) could result in them losing their hosting defence. If such monitoring were to cause illegal content to be assessed, but not removed – either because this content slips through the net in terms of being flagged for removal, or is deemed not to meet an obvious threshold of illegality upon initial inspection – then the platform carrying out the monitoring could be deemed to have obtained knowledge of the illegal content but failed to have taken it down, thereby potentially extinguishing the hosting defence in respect of that content. Indeed, the current E-Commerce Directive in Europe appears to focus on the passivity of the platform, and the lack of knowledge that comes from such passivity, in order for the hosting defence to be available – Recital 42 of the E-Commerce Directive makes it clear that liability exemptions will only persist where the activities have a “mere technical, automatic and passive nature”.

However, it seems very likely that platforms will be expected to ramp up their content scanning processes in order to pre-emptively identify and remove harmful content as part of various global online harms laws or proposals. For many of the big players, such as social media platforms, this is not a new concept. Despite the risks outlined above, and even though Article 15 of the E-Commerce Directive specifically prohibits Member States from imposing either a general obligation to monitor information transmitted and stored, or a general obligation actively to seek facts or circumstances indicating illegal activity, most major social media and content sharing platforms have in place at least some sort of automatic scanning functionality to search for and remove obviously illegal or harmful content.

Ultimately, there are commercial and reputational benefits for platforms in ensuring that they provide a safe, friendly and honest place for users to interact. The scanning technologies deployed tend to use automated tools which search for key words or intellectual property infringements, and often rely on an element of artificial intelligence to grow their decision-making capabilities so as to determine more accurately what constitutes unlawful content. Generally speaking, platforms have not been deemed to have knowledge of content where scanning is carried out at a very high level. But as online harms regimes exert further pressure on platforms to build out even more sophisticated mechanisms for proactively identifying and removing harmful content, the risk increases that vetting or scanning processes may end up being more than just passive and automatic, jeopardising the availability of the hosting defence.

This is less of an issue for platforms operating in the United States. There, section 230 of the Communications Decency Act – the US equivalent of the hosting defence for non-copyright infringements – contains a “Good Samaritan” rule which encourages platforms to take the initiative in blocking objectionable content at an early stage. The rule states that platforms will not be held liable for any action taken in good faith to restrict access to or availability of material that the platform considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable. Platforms can therefore implement systems to check for such content without fear of losing their immunity.

 

A European shift from liability to responsibility?

Thankfully for platforms operating in Europe, the proposed Digital Services Act (“DSA”) reproduces the existing hosting defence, and supplements it with a similar “Good Samaritan” provision. If this is included in the final version, it would mean that platform providers would not lose their liability protections simply by implementing measures to detect, identify and remove illegal content and thereby taking a more active role in this process.

The DSA also introduces a framework setting out responsibilities and accountability for platform providers, including notice and take-down procedures, annual reporting on moderation, and procedures for handling complaints and disputes. This framework applies only to illegal content or activity, and therefore does not extend as far as some proposed online harms regimes. In the UK, for instance, the Online Safety Bill seeks to impose a “duty of care” on digital platforms not just in relation to unlawful online content, but also content that may be lawful but which “gives rise to a reasonably foreseeable risk of a significant adverse physical or psychological impact on individuals”.

Nevertheless, the introduction of a “Good Samaritan” rule is likely to make platforms in Europe feel more comfortable about taking proactive measures to weed out and deal with harmful content on their platforms in order to comply with global online harms responsibilities, knowing that the all-important hosting defence should remain intact if any instances of illegal content are missed.

 

Risk of follow-on class actions?

Both the European Commission and national governments have been careful to distinguish between liability to fines for failing to implement appropriate procedures and liability for specific pieces of content, in relation to which the current Article 14 defence and its successor in the DSA will continue to provide some protection. However, a growing number of countries around Europe are providing procedural mechanisms for collective redress, and claimant lawyers and their funders are developing expertise in the practices of internet platforms, particularly through the data protection regime. This raises the question of whether increased regulatory findings against internet platforms will lead to follow-on civil litigation. For example, would a regulatory finding that a platform did not apply its own terms, or failed to apply appropriate moderation procedures, assist individuals who wish to bring civil claims?

In our view, this is a risk that platforms need to guard against carefully and that legislators should bear in mind. It is not hard to imagine online content that could cause damage to many individuals within a short space of time. There is a risk for platforms that claimant lawyers will seek to reformulate privacy and defamation claims as contractual or negligence claims, relying on regulatory findings of failure to support such claims.

These and other risks are a compelling reason for platforms to think early about the direction of travel of online safety laws, so that they can get ahead of the game.