Key Issues

Managing conflicting requirements between jurisdictions

There is no doubt that there is a general movement, at international level, toward a greater degree of protection against online harms. Very broadly, the main territories seeking to make online harms an area of further regulatory focus share a common theme: ensuring that platforms have in place specific measures and policies, which they actively enforce, in order to promptly remove harmful material. Nevertheless, the exact requirements on a country-by-country basis are nuanced. In the absence of harmonisation across groups of countries, such as the EU, platforms face the inevitable issue of working out how best to accommodate differing regulatory regimes aimed at preventing online harms. Indeed, the consultations on the Digital Services Act package are partly a response to the risk of fragmentation between Member States in their approaches.

Given the international nature of content-sharing platforms, it is likely that they could ultimately face parallel regulatory investigations by national regulators, where either specific content, or broader content moderation practices, are deemed to breach the standards required in a particular territory. A precedent for this can already be seen in, for example, the US Department of Justice’s, European Commission’s and Australian Securities and Investments Commission’s investigations of practices relating to interbank offered rates, which spanned different jurisdictions.

Based on the range of initiatives detailed in this report, and the fact that we do not yet see a unified approach internationally (or even across Europe), it is likely that platforms offering content-sharing services will require dedicated compliance functions to support the management of such cross-jurisdictional requirements. In particular:

  • the reporting requirements in the different jurisdictions are unlikely to be homogeneous; however, efficiency savings could be made by a platform centralising its global compliance function (to a degree), so that materials prepared to fulfil the different reporting obligations can be amended and re-utilised in different jurisdictions;
  • technical tools preventing content from being viewed in a particular jurisdiction based on visitors’ IP addresses will need to be leveraged to enable compliance with the regulatory standards applicable in respective jurisdictions (particularly in relation to any specifically prohibited classes of material, such as “hate speech”; a simple sketch of this kind of geo-blocking follows this list);
  • there will be a need for internationally aligned platform policies on responding to and engaging with bodies regulating online content;
  • when weighing up the duty to proactively combat harmful content versus the scope for inadvertently waiving any sort of “hosting defence”, online platforms will need to be aware that such hosting defences may not be available in all jurisdictions, or may be nuanced in the way that they can be relied upon. The Boundaries of Platform Liability section of this report touches on this further;
  • platforms will need to consider coordination in relation to any restrictions imposed on a periodic or time-limited basis in certain jurisdictions (such as those which may be applicable during electoral campaigning cycles in France); and
  • future technical requirements, such as proposed user tools to flag or notify categories of regulated content, should be anticipated and planned for.
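
To illustrate the point above about jurisdiction-based restrictions, the following is a minimal sketch of how content tagged with a prohibited category might be withheld from visitors in a given jurisdiction. It assumes an IP-to-country lookup is available; the function names and the country-to-category mapping are hypothetical placeholders rather than any particular platform's implementation or legal guidance.

```python
# Illustrative sketch only: gate content by the visitor's country so that material in
# a category prohibited in that jurisdiction is not served there. The country lookup is
# assumed to come from a GeoIP database or service (not shown), and the per-country
# mapping of blocked categories is a hypothetical placeholder, not legal guidance.

from typing import Dict, Set

# Hypothetical mapping: jurisdiction (ISO country code) -> content categories blocked there.
BLOCKED_CATEGORIES_BY_COUNTRY: Dict[str, Set[str]] = {
    "DE": {"hate_speech"},
    "FR": {"hate_speech"},
}


def lookup_country(ip_address: str) -> str:
    """Resolve an IP address to an ISO country code.

    Placeholder: in practice this would query a GeoIP database or a third-party
    geolocation service, subject to applicable data protection rules.
    """
    raise NotImplementedError("plug in a GeoIP lookup here")


def is_viewable(ip_address: str, content_categories: Set[str]) -> bool:
    """Return True if none of the content's categories are blocked in the visitor's jurisdiction."""
    country = lookup_country(ip_address)
    blocked = BLOCKED_CATEGORIES_BY_COUNTRY.get(country, set())
    return not (content_categories & blocked)
```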

Reporting to regulators

Reporting to regulators on efforts made to stamp out online harms will be an increasingly tricky issue for platforms to navigate. The sheer volume of content that goes through the moderation process on many online platforms makes accurate reporting difficult even when a platform reports in just one uniform manner, and this is made all the more challenging where there are differing reporting requirements to different regulators in different countries. Enforceable reporting requirements will be a significant step up from the voluntary commitments currently made under various initiatives such as the EU Code of Conduct on countering illegal hate speech online. The concept of reporting on online harms is a common theme in existing laws and proposals for online harms laws. For instance:

  • in Germany, NetzDG already requires German-language reports to be published twice a year, setting out resolution times for take-down of harmful content and other statistics relating to the report handling procedure (and the proposed amendment would extend this requirement to oblige platforms to give details of machine learning algorithms used as part of the platform’s wider content moderation effort);
  • in the UK, the government’s consultation response following its Online Harms White Paper envisages annual transparency reporting to the regulator. Details about specific requirements are unclear for now, except that the reports would need to outline the prevalence of harmful content on the platform and the measures being taken to address it, and that the reporting requirements may vary based on the type of service being provided; and
  • in India, the proposed amendment to the Information Technology (Intermediaries Guidelines) Rules 2011 contemplates platforms keeping a record of unlawful activities on their platform for a specified period of 180 days.

In some cases, the type of platform on which such obligations will fall will vary between countries, depending on the scope of the law at hand. Nevertheless, it seems likely that many platforms will have to contend with numerous country-by-country reporting requirements which call for distinct information and need to be reported on at different times. Platforms will have to decide whether this is operationally feasible and whether to comply, or whether to lobby local online harms regulators with the aim of agreeing a set of consolidated reporting tools which the platform is comfortable can be deployed across numerous countries.

Pre-empting potential fragmentation

While the French Avia law initiative failed, it may tell us something about the kinds of country-specific requirements that could arise in future legislative proposals. The Avia law (the majority of which was struck down by the Constitutional Council for incompatibility with fundamental rights) included a requirement to designate a natural person located on French territory to act as a contact person responsible for receiving requests from the regulator. In addition, the fact that the French government launched such an initiative in parallel with the EU’s Digital Services Act consultation shows that, in spite of centralised harmonisation efforts in Europe, individual Member States are likely to pursue their own agendas.

In spite of the potential fragmentation of requirements across different jurisdictions, one common trend is the introduction of voluntary codes of practice or conduct. Progressive compliance with, and engagement in, these non-binding initiatives is likely to be an effective means of pre-empting a new binding regime in a particular jurisdiction (or within the EU). Keeping an eye on such initiatives and proactively seeking to participate in them is likely to be the most effective strategy for adapting existing systems so that they are best placed to meet any future regulatory requirements.

Proactive moderation and use of AI

As international online harms developments begin to place a greater emphasis on wider duties of care to prevent harmful content, and perhaps even on more proactive detection and take-down procedures, the role of Artificial Intelligence (“AI”) and machine learning becomes even more important.

As AI algorithms become more sophisticated, more and more platforms are turning to technology to ensure that different forms of content meet their standards, to promote compliance with global regulations and to avoid harm to their users. AI-based systems are being positioned to supplement human content moderation processes, and this is a trend which seems set to continue.

The challenge of the volume of content

Large online platforms often receive millions of unique submissions each week, and increasingly diverse mixtures of content add pressure to a moderation process which needs to quickly recognise and respond to potentially harmful content. Each day, around 576,000 hours of content are uploaded to YouTube, and users watch more than a billion hours of video. This volume of content represents an insurmountable challenge for human review alone, even at the most financially capable companies in the world. Enormous amounts of money are invested into developing tools to assist human moderators, not least to minimise the amount of harmful content to which individual human moderators are themselves exposed – indeed, some online platforms are all too aware of the scope for inadvertently causing psychological harm to moderators who are asked to review the most objectionable material.

How AI can assist

The role of AI is likely to differ depending on content type. Nevertheless, it is likely to continue to play a vital role in content moderation. For instance:

  • for text-based content:
      • machine learning models can be used to review contributions by looking at a vast combination of data points. On a basic level, reviews can be filtered by specific words or phrases, such as profanity or generally unhelpful language (a minimal sketch of this kind of filtering follows this list);
      • patterns of submissions can also be tracked across accounts and locations on site, to find users or locations which are promoting fake or harmful content; and
      • subject to applicable data protection (and similar) laws, IP addresses and geolocation data can be used to track harmful content submission across devices and even websites. For instance, in the context of online reviews, this could be useful to identify whether it is a subject’s family member who is posting positive reviews for their restaurant, and negative reviews for their competitors;
  • for image and video content:
      • images, although more technically complex to assess using an algorithm or AI process, can be assessed in a broadly similar way to text-based submissions. Platforms can set up categories and filters for what is deemed acceptable and unacceptable;
      • visual recognition tools currently employed by platforms are exceptionally good at making detailed identifications of objects or activities, such as the make or model of weapons, or facial expressions and body positions suggestive of violence; and
      • the audio accompanying a video can be parsed by software tools to recognise gunfire, or non-speech noises indicative of violence and other prohibited content.
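
By way of illustration of the basic text-based techniques above, the sketch below (referenced in the first bullet) combines a simple keyword filter with a crude posting-frequency check and routes borderline items to human review. The blocklist, thresholds and function names are hypothetical placeholders; a real system would rely on trained classifiers and richer signals rather than keyword matching alone.

```python
# Minimal sketch of the basic text-moderation steps described above, assuming a
# platform-defined blocklist and per-account submission history; names are
# illustrative, not a real platform's API.

import re
from collections import defaultdict
from datetime import datetime, timedelta

BLOCKLIST = {"examplebadword", "anotherslur"}   # placeholder terms
SUBMISSION_LOG = defaultdict(list)              # account_id -> list of submission timestamps


def contains_blocked_terms(text: str) -> bool:
    """Basic word/phrase filter: flag text containing any blocklisted term."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)


def looks_like_spam_pattern(account_id: str, now: datetime,
                            max_posts: int = 20,
                            window: timedelta = timedelta(hours=1)) -> bool:
    """Crude velocity check: flag accounts posting unusually often, a simple proxy
    for the cross-account pattern tracking described above."""
    history = SUBMISSION_LOG[account_id]
    history.append(now)
    recent = [t for t in history if now - t <= window]
    SUBMISSION_LOG[account_id] = recent
    return len(recent) > max_posts


def triage(account_id: str, text: str) -> str:
    """Return a moderation outcome: 'block', 'review' (send to a human), or 'allow'."""
    if contains_blocked_terms(text):
        return "block"
    if looks_like_spam_pattern(account_id, datetime.utcnow()):
        return "review"
    return "allow"
```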

The limitations of AI and the continued role of human intervention

Despite this, it is commonly accepted that current algorithms are far from perfect, and false positives and negatives are still commonplace. Certain harmful content requires an understanding of the surrounding context to determine whether it breaches platform standards. For instance, language may be used between adults which would not commonly be regarded as harmful; however, when used in the context of online bullying of a child, the same words can have a very different effect. Moderation of this content by AI systems requires an understanding of the context and history of interactions between users, allowing innocuous and harmful content to be distinguished.

Similarly, not all violent content is created equal. A video of the recent protests in the USA, or Belarus, although containing violence, might be something that a platform or society as a whole wishes to publicise. There is a balance to be found between gratuity and documentary evidence.

Dealing with these issues is not outside the scope of AI moderation tools, and it would likely be simplistic to suggest that an AI tool’s decision-making is inherently “binary” and cannot be trained or tailored to respond to contextually complex scenarios. Moderation tools can even be taught to identify the different types of harmful content which a platform might need to deal with, for instance distinguishing fake positive reviews from fake negative reviews, or identifying content which might be an ad despite being uploaded as a standard post or review.

Contemporary moderation and analysis tools are extraordinarily robust. It has been reported that some technology companies now have tools powerful enough to automatically scan all live streams on all major social media platforms in real time as they are broadcast. This enormous amount of content can be filtered and assessed as it is uploaded, and automatically flagged as potentially harmful. This does not necessarily mean that the role of human moderators will become a thing of the past. Indeed, in Germany, the NetzDG law implies a right for a user’s content to be subject to human decision-making, at least as part of an appeal/counter-notice process. And in EU data protection law, there are potential pitfalls in making entirely automated decisions where personal data is involved; any such processing raises issues around the lawful basis for processing and around transparency. Nevertheless, having machine-led functions carry out a broad initial analysis, which can then be backed up by human moderation, seems like a necessity for larger platforms.
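
As a rough illustration of that machine-led first pass with human back-up, the sketch below scores content with a (placeholder) classifier, automatically removes only high-confidence matches, queues borderline items for human review, and routes appeals against automated removals back to a human. The classifier, thresholds and queue are assumptions for illustration, not a description of any regime's requirements.

```python
# Sketch of a machine-first, human-backed moderation pipeline of the kind described
# above. The classifier, thresholds and review queue are hypothetical placeholders.

from dataclasses import dataclass
from queue import Queue


@dataclass
class ModerationResult:
    item_id: str
    score: float     # model's estimated probability that the item is harmful
    decision: str    # "removed", "needs_human_review" or "published"


human_review_queue: "Queue[str]" = Queue()


def classify(item_id: str, content: bytes) -> float:
    """Placeholder for a trained model scoring content from 0 (benign) to 1 (harmful)."""
    raise NotImplementedError("plug in a real classifier here")


def moderate(item_id: str, content: bytes,
             remove_above: float = 0.95, review_above: float = 0.6) -> ModerationResult:
    score = classify(item_id, content)
    if score >= remove_above:
        decision = "removed"                 # high-confidence automated take-down
    elif score >= review_above:
        decision = "needs_human_review"      # borderline: escalate to a human
        human_review_queue.put(item_id)
    else:
        decision = "published"
    return ModerationResult(item_id, score, decision)


def appeal(result: ModerationResult) -> None:
    """Any automated removal can be routed to a human decision-maker on appeal."""
    if result.decision == "removed":
        human_review_queue.put(result.item_id)
```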

As the expectations of lawmakers increase, it is likely that platforms will continue to make concerted efforts to introduce and refine new technologies to assist them in eliminating harmful content from their platforms at the earliest opportunity.

The boundaries of platform liability

The imposition of duties on platforms to take more responsibility for preventing online harms brings up some interesting considerations about whether actively monitoring for harmful content, in discharge of the platform’s duty of care, could impact the platform’s ability to rely on a hosting defence.

The fundamentals of the hosting defence

In Europe, Article 14 of the E-Commerce Directive (2000/31/EC) allows information society services (which would include most online platforms) to avoid liability for unlawful activity or information on their site where (a) they do not have actual knowledge of unlawful activity or content and, where a claim for damages is concerned, they are not aware of facts or circumstances from which it would have been apparent that the activity or information was unlawful, and (b) upon obtaining actual knowledge or awareness of the unlawful activity or content, they act expeditiously to remove or to disable access to it. This hosting defence is relied upon heavily by platforms which give users the ability to post user-generated content. More often than not, robust notice and takedown mechanisms are in place to ensure unlawful content can easily be reported by users, investigated by the platform and taken down if deemed to be illegal or contrary to the platform’s own acceptable use policies or community standards.

The interplay of the hosting defence and online harms responsibilities

We should make it clear that most of the proposals that we have referred to in this report do not automatically remove or override a hosting defence. Indeed, there is no reason why increased duties on platforms to remove and report on harmful content cannot sit alongside their existing notice and takedown procedures, without platforms incurring any further liability. For example, the hosting defence is widely regarded as still being fully available to video-sharing platforms which implement measures to meet the requirements of the EU’s Audiovisual Media Services Directive. Still, it is a natural consequence that, in putting in place processes to better identify and remove harmful content in order to meet the higher duty of care imposed by the various online harms proposals, platforms may be more likely to obtain knowledge of harmful content. For instance, if a piece of harmful content is brought to the attention of a specific content reviewer as part of a vetting process (whether pre-distribution or as a result of a user complaint), the platform will be deemed to have knowledge of the content and, depending on the nature of the content and of the complaint (where relevant), knowledge that the content is unlawful. Regardless of whether the content falls foul of the platform’s own policies for preventing online harms, if the platform does not take it down promptly, it risks being held liable for it without the ability to rely on the hosting defence.

Of course, this issue is not new for online platforms, which have had to remain all too aware that once content is specifically moderated, the risk of losing the hosting defence increases if takedown is not quick. Nevertheless, if platforms start putting additional measures in place to comply with various online harms proposals – noting that some proposals cover non-illegal, but nevertheless harmful, content as well as clearly illegal content – they should bear in mind the risk of inadvertently obtaining knowledge or awareness of unlawful content which is put on their radar through their reporting tools, and therefore the risk of losing the hosting defence if this content is not promptly removed. Furthermore, in order to comply with the requirements in certain countries, there may now be stricter time limits within which this take-down has to take place – for instance, within 24 hours of receiving notice of illegal content under Germany’s existing NetzDG laws, and within 24 hours of receiving a court order or government notification in India.

Distinctions between the US and EU approach

In the United States, section 230 of the Communications Decency Act – the US equivalent of the hosting defence for non-copyright-related infringements – states that platforms will not be held liable for any action taken in good faith to restrict access to or availability of material that the platform considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable. This is often referred to as the “Good Samaritan” rule, and it confirms that intermediaries, such as platforms, which take the initiative to try to block objectionable material on their service, for instance by having in place systems to check for such content, will not lose their immunity. In Europe, by contrast, there appears to be more of a focus on the passivity of the platform, and the lack of knowledge that comes from such passivity, as a precondition for relying on the hosting defence, with Recital 42 of the E-Commerce Directive making it clear that the liability exemptions only cover activities of a “mere technical, automatic and passive nature”.

The role of pre-emptive scanning

It also seems likely that platforms will be expected to ramp up their content scanning processes in order to pre-emptively identify and remove harmful content as part of the various online harms proposals. For many of the big players, such as social media platforms, this is not a new concept. Even though Article 15 of the E-Commerce Directive specifically prohibits Member States from imposing on information society services either a general obligation to monitor the information which they transmit or store, or a general obligation actively to seek facts or circumstances indicating illegal activity – meaning that platforms have generally not been expressly obliged to proactively seek out illegal content shared or distributed via their services – most major social media and content-sharing platforms have in place automatic scanning functionality to search for and remove obviously illegal or harmful content. Such technology tends to use automated tools which scan for keywords or intellectual property breaches, and often relies on an element of artificial intelligence to grow its decision-making capabilities in order to more accurately determine what constitutes unlawful content.
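
As a simplified illustration of this kind of pre-emptive scanning, the sketch below checks uploads against a set of hashes of files already known to be prohibited. The hash set is a placeholder, and production systems typically layer perceptual hashing (robust to re-encoding or cropping) and machine learning classifiers on top of exact matching.

```python
# Simplified illustration of pre-emptive scanning at upload time: compare each file
# against a set of hashes of content already known to be prohibited. The hash set is
# a placeholder; real deployments generally also use perceptual hashing and trained
# classifiers rather than exact matching alone.

import hashlib
from typing import Set

KNOWN_PROHIBITED_HASHES: Set[str] = set()   # would be populated from a curated database


def sha256_of(file_bytes: bytes) -> str:
    """Compute the SHA-256 digest of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()


def scan_upload(file_bytes: bytes) -> str:
    """Return 'block' for exact matches against known prohibited content,
    otherwise 'pass' (further keyword/classifier checks would follow)."""
    if sha256_of(file_bytes) in KNOWN_PROHIBITED_HASHES:
        return "block"
    return "pass"
```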

Ultimately, there are commercial and reputational benefits for platforms in ensuring that they provide a safe, friendly and honest place for users to interact. Generally speaking, platforms have not been deemed to have knowledge of content, such that they lose the hosting defence, merely because high-level automated content scanning is carried out; as mentioned above, Recital 42 of the E-Commerce Directive implies that knowledge or control of information transmitted via a service will not arise where an activity by the platform is “of a mere technical, automatic and passive nature”. We expect platforms to continue with this approach as they build out more sophisticated mechanisms for proactively identifying and removing content which falls within the remit of the various online harms proposals worldwide. Indeed, if the proposals in India are adopted, platforms operating there will be required by law to proactively monitor for unlawful content, which could itself be an onerous undertaking. In doing so, platforms should keep in the back of their minds whether the structure of a vetting or scanning process may end up being more than just passive and automatic, such that it limits the ability to rely on a hosting defence in a certain country.

Erosion of the hosting defence in Europe?

There has been some case law, particularly in Europe, which has arguably eroded the scope of the hosting defence afforded to platforms. For instance, in the Delfi AS v Estonia (64569/09) judgment of the European Court of Human Rights, a news website was held liable for failing to remove readers’ online comments on its articles which contained messages of hatred. It is likely that the online harms proposals discussed in this report will reinforce the position that websites allowing user content must take their responsibility to prevent harmful content seriously. In the Delfi case, the harmful comments made it through the website’s initial vetting procedure, and the website then delayed in taking them down. It may well be that the current proposals would also treat this kind of behaviour as a failure to prevent online harm, particularly if the failure to remove the harmful comments contravened the website’s own acceptable use policies. However, it should be noted that in the Delfi case the news website was deemed to be a publisher rather than an intermediary; the user comments were found by the courts to be integrated into the journalistic content of the site, particularly as the news portal had an economic interest in providing user commentary alongside its articles. Because the website was not deemed to be an intermediary, it could not rely on the hosting defence. Therefore, even if online harms proposals at European level require platforms to take appropriate measures to prevent and remove harmful content, many platforms would hope to be able to position themselves as an intermediary. In doing so, they should still be able to rely on the hosting defence where they become aware of harmful content, as long as the content is moderated in accordance with their own policies and, particularly where it is unlawful, is removed expeditiously.

Nevertheless, at EU level, the hosting defence is deemed to be in need of an update to face the realities of how platforms operate. In particular, the European Commission has noted that in the twenty years since the E-Commerce Directive was adopted, there has been a constant evolution in the ways that websites and platforms allow users to communicate, access information and shop. The European Commission’s Digital Services Act package, briefly described in the Online Harms Legislation and Proposals Worldwide section of this report, is designed to deal with the fact that these developments have the potential to expose users to illegal goods, activities or content. The package is likely to lead to a shaking-up of the hosting defence, although it remains to be seen how this may look following the Commission’s public consultation. In terms of online harms, it seems likely that the EU will take into account how domestic legislation and proposals are intending to deal with harmful content, and it may well be that the hosting defence reforms dovetail more clearly with those domestic initiatives, in order to expressly set out how and when the hosting defence can be relied upon by platforms in the context of their efforts to tackle online harms.
