1 Introduction

In a(n overly) simplistic world view, content moderation is a process with binary outcomes: those that are “right” and those that are “wrong”. And, in a copyright content moderation context, this binary nature, in principle, holds true: the ultimate question is simply, “Is this content illegal?”Footnote 1 Only two outcomes are possible: either content infringes copyright or it does not. Some content moderation decisions are, naturally, more straightforward than others: for example, in instances where content is – in the words of the European Commission – “manifestly illegal”.Footnote 2 Others, meanwhile, require more robust assessment, whether by domain experts, or even the courts. In either case, however, the “quality” of content moderation results can be assessed objectively, according to a binary standard.

Content moderation has become increasingly automated, and the reliance on algorithms is not only here to stay but also likely to continue to increase. For example, YouTube’s Content ID system, a fingerprinting system to automatically identify protected content, was introduced back in 2007. The lawmaker, too, is interested in these technological advances: in 2018, for example, the European Commission “encouraged” online platforms to take proactive measures to detect (any type of) illegal content, including by automated means.Footnote 3 The Digital Services Act (DSA)Footnote 4 has built upon this trend, the best example of which is the inclusion of a “Good Samaritan” clause in Art. 7 DSA.Footnote 5 Indeed, algorithmic (copyright) content moderation is arguably today’s most prominent use case of automated (micro) legal decision-making, by sheer volume.Footnote 6 In a manner of speaking, content moderation is legal decision-making on steroids.

The Digital Services Act is the first European framework to provide a legal definition of “content moderation”.Footnote 7 In essence, it encompasses a large variety of activities that address content that is illegal or that the platform concerned deems incompatible with its private contractual framework in the form of terms and conditions.Footnote 8 It is also distinct from content recommendation, i.e. the algorithmic selection or prioritisation of information presented to a user,Footnote 9 which is not addressed in this article.

There has been much discussion of the multiple issues related to large-scale automation, both within copyright and beyond.Footnote 10 In this context, “false positives” and “false negatives” are regularly invoked in law and policy discourse. This article aims to conceptualise these notions and explore their implications. I argue that we need to focus increasingly on decision quality in (copyright) content moderation. First, I propose a benchmark for decision quality. Then, I explore the various mechanisms introduced in the context of Art. 17 of the Directive on copyright in the Digital Single MarketFootnote 11 (CDSM Directive) and the horizontal Digital Services Act, with a view to pinpointing exactly where in the decision-making process mitigation mechanisms come into play. Finally, I consider to what extent these safeguards or mitigation mechanisms influence decision quality.

2 What Is the Benchmark for Decision Quality in Copyright Content Moderation?

For the purposes of analysis, it is assumed here – perhaps generously – that the quality of decisions is commensurate with their fidelity to the law, that is to say, as explained in the introduction, that a “good” content moderation decision is one that (through automated or manual content moderation) applies the law correctly, while a “bad” decision is one that applies the law incorrectly.Footnote 12

This assumption requires some explanation, and indeed justification. There are several “layers” of legal rules that are potentially relevant for a content moderation decision. First, and perhaps most obviously, are the substantive legal rules laid down in statute by legislators and adjudicated by the judiciary, i.e. the copyright acquis in this context. A second “layer” is formed by the private inter partes rules based on contracts, typically in the form of terms and conditions, or community guidelines. Whilst these rules are, in principle, entered into by mutual agreement, in reality the platform has significant power (both legally and practically) to lay down its own contractual rules for what is lawful and what is not. A third “layer” can also be discerned, namely law directly related to content moderation or intermediary liability that regulates platforms’ conduct to such ends, e.g. the Digital Services Act or the CDSM Directive. There are potentially other modes of regulation that one might reasonably conceive of as “law”, such as soft-law recommendations and guidelines. However, these, along with users’ normative perceptions of content moderation rules (broadly conceived), are outside the scope of the present analysis.

As elegantly put by Train, “[r]egulation in the real world is far from optimal, and it is perhaps unrealistic to believe that it ever will be”.Footnote 13 Nonetheless, for the purposes of analysis, let me set out the following assumption for a basic analytical model: let us assume that the European copyright framework regarding substantive rights represents the starting point of “optimal” regulation.Footnote 14 The intermediary liability (exemption) framework is then a second layer, which can be adjusted in response to changing behaviour on the part of intermediaries/platforms, either by adjusting the substantive conditions for exemption from liability themselves, or by adjusting the procedural rules around that exemption.

Applied to the context of copyright content moderationFootnote 15 by online platforms and the impact on access to culture, this means the following. The “quality” of copyright content moderation correlates with access to culture, because access to culture is considered to be embedded in the existing copyright framework.Footnote 16 Since the existing framework is assumed to strike the appropriate balance between exclusivity in copyright protection and access to culture, any variation in that balance – beyond the margin of interpretation allowed by law – will impact on such access. Consequently, both excessive and insufficient content moderation will have a negative impact on access to culture. Simply put, excessive content moderation by platforms restricts such access.Footnote 17 Conversely, insufficient content moderation increases access to culture in the short run, but in a harmful way, because it encroaches on the legitimate interest of copyright holders and thus distorts the optimal balance, concomitantly harming access to culture in the long run. In other words: the smaller the difference between actual content moderation performed by intermediaries and correct application of the legal framework, the lesser the negative impact on access to culture.

The situation becomes trickier when private regulation is involved. Currently, online platforms enjoy, in principle, wide contractual freedomFootnote 18 and may, for example, voluntarily go “beyond” what is required by statutory law. A popular non-copyright example of this is Instagram’s policy on nudity and the differential treatment of male and female nipples.Footnote 19 Terms and conditions (i.e. the “contract” between user and platform) may also in certain instances – where permitted – deviate from the fallback substantive rules in the EU acquis. However, as private regulation is permitted under the democratically legitimised legal framework, let us assume here too, for the sake of argument, that such private regulation is itself an eligible benchmark for content moderation decision quality.Footnote 20

3 Quality and Errors

In any case, the “quality” of content moderation can be described very simply in terms of correct and erroneous results. For simplicity, let us differentiate in the following between illegal content (i.e. copyright-infringing) and legal content (i.e. not copyright-infringing).Footnote 21

The following description of outcomes is borrowed from statistics and is commonly invoked in the context of content moderation.Footnote 22 There are four theoretical outcomes that need to be distinguished, as shown in Fig. 1. The following focusses on making illegal content unavailable without going into technical details or considering other measures such as demotion, demonetisation or measures related to the user’s account.Footnote 23

Fig. 1 Error types in copyright content moderation
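To make the four outcomes concrete, the following minimal Python sketch (purely illustrative; the sample decisions and the “ground-truth” legal assessments are invented assumptions, not any platform’s actual logic) tallies moderation decisions against an assumed correct legal assessment into the four categories of Fig. 1.

```python
from collections import Counter

# Each record pairs a platform's moderation decision with the (assumed)
# correct legal assessment of the uploaded content.
# "removed" = content taken down; "infringing" = content actually illegal.
decisions = [
    {"removed": True,  "infringing": True},   # infringing upload taken down
    {"removed": False, "infringing": False},  # lawful upload left up
    {"removed": True,  "infringing": False},  # lawful upload taken down
    {"removed": False, "infringing": True},   # infringing upload left up
]

def classify(removed: bool, infringing: bool) -> str:
    """Map one moderation decision onto the four outcomes of Fig. 1."""
    if removed and infringing:
        return "true positive"
    if not removed and not infringing:
        return "true negative"
    if removed and not infringing:
        return "false positive (type-I error)"
    return "false negative (type-II error)"

tally = Counter(classify(d["removed"], d["infringing"]) for d in decisions)
print(tally)
```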

3.1 No Errors

The first set of outcomes relates to a correct result of content moderation (i.e. the absence of error): if illegal content is taken down, there is no error in the moderation (true positive). The following example illustrates this scenario: a musical work that infringes copyright (i.e. is not covered by any limitation or exception and is not in the public domain) and that a user has uploaded in its entirety to the platform is identified and removed by said platform’s content moderation tools and practices. The platform’s content moderation achieves the correct result.

Similarly, if legal content is not taken down, there is no error present in the moderation (true negative). The following example illustrates this scenario: the upload of a copyright-protected work that is covered by a limitation or exception and thus is not copyright-infringing. The uploaded content is consequently not flagged or acted upon by the platform’s content moderation tools and practices. In this scenario too, the platform’s content moderation achieves the correct result.

This set of outcomes (true positive and true negative) represents the optimal state of content moderation based on the above benchmark for decision quality. Since this benchmark rests upon the framework de lege lata (i.e. what is permitted under copyright law and what is not), it does not necessarily represent the optimal state of regulation (i.e. how the rules and private policies should be).

3.2 Error

The second set of outcomes relates to false results of content moderation, i.e. errors.Footnote 24 There is an error, firstly, where legal (i.e. non-infringing) content is taken down. This is also referred to as a false positive (or type-I error). The following examples illustrate this scenario. First, a (copyrightable) work that is in the public domain but is (falsely) identified as copyright-protected and taken down. A second, far trickier, copyright-specific example relates to a copyright-protected work whose use is permitted because it falls under a limitation or exception.Footnote 25 In the context of online content-sharing service providers, Art. 17(7) CDSM Directive introduces a specific mandatory regime for certain limitations and exceptions, namely: (i) quotation, criticism, review; and (ii) use for the purpose of caricature, parody or pastiche.Footnote 26 These limitations and exceptions require – to varying degrees – a contextual analysis,Footnote 27 profound knowledge of national and EU copyright law, and sometimes the involvement of the CJEU, e.g. in the case of parody.Footnote 28

A second type of error is present in instances where illegal content is not taken down. This is also referred to as a false negative (or type-II error). An example of this would be the unlicensed use of a copyright-protected work where the platform’s content moderation fails to detect the copyright-infringing material.

3.3 The “Upstream” Question of the “Right” Balance

Based on the above assumption, this implies, firstly, that any moderation by intermediaries that produces type-I errors (false positives) or type-II errors (false negatives) has a negative impact on access to culture.Footnote 29 This suggests, in other words, that pure availability of content is not always optimal for access to culture, at least in a broader copyright context.Footnote 30 Secondly, then, true positives and true negatives are not detrimental to access to culture: their impact is neutral.Footnote 31 However, there are several drawbacks to this simplified analysis. Chief among them is the fact that no consideration is given to the normative aspects of copyright law.Footnote 32 Furthermore, this model does not account for the uncertainty associated with the margin of discretion that platforms may have when designing their content moderation terms and conditions or other means such as licensing. This normative “upstream” dimension of access to culture challenges the assumption on which the present model is based by calling into question the copyright balance struck in the existing legal framework as such. However, that is outside the scope of this article.

4 How Much Error Is Acceptable De Lege Lata?

Based on the above, the principal question now should be: what error rate is acceptable under the legislative framework (and thus to society)? This question relating to the quality of content moderation – or the question of error – is, partly implicitly, addressed in several dimensions in the European legislative framework and jurisprudence.Footnote 33

As a horizontal (i.e. not copyright-specific) framework, the Digital Services Act will apply to all intermediary service providers from 17 February 2024 and has applied to very large online platforms (VLOPs) since 25 August 2023.Footnote 34 It explicitly addresses content moderation error rates in several provisions.

In relation to voluntary measures by online platformsFootnote 35 to ensure that illegal (in our context: copyright-infringingFootnote 36) content remains unavailable, recital 26 DSA states that automation technology must be “sufficiently reliable to limit to the maximum extent possible the rate of errors”. The recital refrains from further specifying what would be considered “sufficiently reliable”, but the superlative used in relation to the limitation of error is noteworthy. At first glance, the two parameters (i.e. “sufficiently” and “maximum”) appear to be somewhat in tension with one another. In another content moderation context, Art. 17 CDSM Directive, explored in more detail below, requires certain measures to be “in accordance with high industry standards of professional diligence”. Recital 26 DSA does not put forward a requirement of a high(est) industry standard but, in lieu of other benchmarks, a high industry standard could be a relevant concept for understanding the “maximum extent possible”.

In yet another context, in relation to transparency reporting, Art. 15(1)(e) DSA obliges intermediary service providers,Footnote 37 when submitting transparency reports, to include information on “any use made of automated means for the purpose of content moderation, including a qualitative description, a specification of the precise purposes, indicators of the accuracy and the possible rate of error of the automated means used in fulfilling those purposes, and any safeguards applied”.Footnote 38 How this possible rate of error should be calculated is not specified in any further detail.Footnote 39 Furthermore, it can be assumed that, once a platform discovers erroneous decisions when preparing its reports, such errors would likely be corrected ex post. In any case, the number of complaints received through an internal complaint-handling system (cf. Art. 15(1)(d) DSA) can only provide one starting point, since it is unlikely that all erroneous content moderation decisions will lead to such a complaint in the first place.Footnote 40 Both examples underline the crucial role of error in content moderation. However, they also imply that error rates cannot (and need not) be equal to zero. Importantly, the DSA does not differentiate between type-I (false-positive) and type-II (false-negative) errors.
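Neither recital 26 nor Art. 15(1)(e) DSA prescribes how a “possible rate of error” is to be computed. Purely as a hedged illustration – the figures below are invented, and the DSA does not mandate this methodology – one conceivable approach would be to report type-I and type-II error rates separately:

```python
# Hypothetical reporting-period figures; the DSA does not prescribe how a
# "possible rate of error" is to be calculated, so this is only one option.
removed_total        = 1_000_000   # automated removals in the period
removed_in_error     = 8_000       # removals later found to concern legal content
not_removed_total    = 50_000_000  # uploads left up after automated screening
missed_infringements = 40_000      # infringing uploads the tools failed to catch

false_positive_rate = removed_in_error / removed_total          # type-I errors
false_negative_rate = missed_infringements / not_removed_total  # type-II errors

print(f"False-positive rate: {false_positive_rate:.2%}")  # 0.80%
print(f"False-negative rate: {false_negative_rate:.2%}")  # 0.08%
```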

Another starting point in the DSA relates to content moderation activities in the light of fundamental rights.Footnote 41 According to Art. 14(1) DSA, all intermediary service providers, including online platforms, are required to inform users in their terms and conditions of inter alia “measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review”.Footnote 42 With regard to errors, Art. 14(4) DSA is of special interest. It obliges intermediary service providers, when applying and enforcing their terms and conditions, to “act in a diligent, objective and proportionate manner” and “with due regard to the rights and legitimate interests of all parties involved, including the fundamental rights of the recipients of the service, such as the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as enshrined in the Charter”. Its exact extent remains vagueFootnote 43 but a connection between error rates and fundamental rights is implied that would have to be accounted for in the context of any assessment under Art. 14 DSA.

Finally, the Digital Services Act contains specific additional obligations for VLOPs such as YouTube, TikTok and Instagram, i.e. those that have more than 45 million active users; these obligations can become relevant pointers for content moderation decision quality.Footnote 44 Under certain circumstances, VLOPs will be required to perform risk assessments and put in place means of risk mitigation:Footnote 45 under Art. 35(1) DSA, VLOPs will be required to mitigate risk by, e.g. “adapting content moderation processes, including the speed and quality of processing notices related to specific types of illegal content and, where appropriate, the expeditious removal of, or the disabling of access to, the content notified, in particular in respect of illegal hate speech or cyber violence, as well as adapting any relevant decision-making processes and dedicated resources for content moderation”.Footnote 46 As can be seen, copyright is not mentioned as a specific area of focus. However, this does not mean that such adaptation of content moderation would not be relevant in a copyright context.Footnote 47

The copyright-specific framework for online content-sharing service providers (OCSSPs) in Art. 17 CDSM Directive, too, can be understood as addressing decision quality. OCSSPs, defined in Art. 2(6) CDSM Directive (with further guidance in recitals 62 and 63), can be understood as a specific subset of online platforms in the DSA, whose main purpose is to store and give the public access to a large amount of copyright-protected content that has been uploaded by its users and that it organises and promotes for profit-making purposes.Footnote 48 With regard to content moderation by OCSSPs like YouTube, Twitter, Pornhub or similar, Art. 17 CDSM Directive appears to contain a stunningly clear and equally surprising answer provided by the European legislator: taking the text of that Article at face value, it seems that the acceptable margin of error is very close to zero.

Firstly, as noted above, Art. 17(4)(b) CDSM Directive requires OCSSPs to make their best efforts to ensure that specific works remain unavailable in accordance with high industry standards of professional diligence.Footnote 49 Article 17(7) CDSM Directive then states that the cooperation between OCSSPs and rightholders “shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation”. Since this cooperation directly affects a platform’s content moderation practices, the question is what standard exactly is set by the requirement that lawful uses of copyright-protected works may not be prevented. Read alone, Art. 17(7) CDSM Directive could be interpreted in a way that merely prohibits systematic over-enforcement. However, read in conjunction with the third paragraph of Art. 17(9) of the CDSM Directive,Footnote 50 it seems that the standard might be stricter than that: this provision notes in a more straightforward fashion that the Directive “shall in no way affect legitimate uses, such as uses under exceptions or limitations provided for in Union law […]”.Footnote 51 Since the first and second paragraphs of Art. 17(9) of the CDSM Directive deal merely with complaint and redress mechanisms, it could be argued that the third paragraph also relates only to the ex post mitigation of errors. At the same time, the very wording of the third paragraph (“[t]his Directive […]”) points to a more holistic standard. This reading is also supported by its second requirement that it “shall not lead to any identification of individual users nor to the processing of personal data” unless in compliance with the General Data Protection Regulation (GDPR).Footnote 52 A narrow interpretation of the third paragraph would entail that such data protection considerations were only deemed worth reiterating by the lawmaker in the context of redress mechanisms, but not in the context of the content moderation decision in the first place. Likewise, the fourth paragraph of Art. 17(9) of the CDSM Directive contains an information requirement for OCSSPs’ terms and conditions, which is also not related to ex post scenarios.

Furthermore, as mentioned above, the provision in Art. 17(7) CDSM Directive also harmonises the mandatory limitations and exceptions for quotation, criticism, and review, as well as use for the purpose of caricature, parody or pastiche.Footnote 53 Since error-free application of these limitations and exceptions is not feasible in practice – whether or not automation is involved – redress mechanisms are put in place to mitigate errors. However, in this context, the European Commission’s Guidance on Art. 17 notes that “to restore legitimate content ex post […] once it has been removed or disabled” would “not be enough for the transposition and application of Art. 17(7)”.Footnote 54 Therefore, the Commission’s Guidance goes on to argue, “automated blocking, i.e. preventing the upload by the use of technology, should in principle be limited to manifestly infringing uploads”.Footnote 55

The question of copyright content moderation quality and error in the context of this provision was also touched upon by Advocate General (AG) Øe in his opinion in case C-401/19, Poland v Parliament and Council. On the one hand, the AG notes that it follows from Art. 17(7) CDSM Directive that OCSSPs are prohibited from removing copyright-protected content that is covered by a limitation or exception “on the ground that that content infringes copyright”Footnote 56 and concomitantly limits OCSSPs’ freedom to conduct a business in order to ensure freedom of expression for users. On the other hand, Øe also points out that Art. 17(7) CDSM Directive “does not mean that the mechanisms which lead to a negligible number of cases of “false positives” are automatically contrary to that provision”.Footnote 57 Yet, the AG notes that error rates “should be as low as possible”.Footnote 58 Therefore, AG Øe argues that, in situations where the current technological state of the art for automatic filtering tools is not sufficiently advanced to prevent a significant false-positive rate, the use of such a tool should be precluded.Footnote 59 This interpretation by the AG is noteworthy since “as low as possible” could indicate a more lenient standard than the Directive’s standard of affecting legitimate uses “in no way”. Thus, Art. 17 CDSM Directive contains indicators as to the acceptable error rate for both false negatives and false positives. However, it is noteworthy that, according to AG Øe, over-blocking – i.e. a higher false-positive rate – may be justified in certain cases, given CJEU case-law and in the light of the “effectiveness of the protection of the rights of rightholders”.Footnote 60 Thus, in copyright content moderation by OCSSPs, the acceptable error rate for false positives does not necessarily correspond with that for false negatives.

Even though the CJEU, in its judgment of 26 April 2022, did not adopt the AG’s specific reflections on error rates, it too underlines the prescription of a specific result in the “unambiguous wording” of Art. 17(7).Footnote 61

In its assessment of proportionality, the Court recalls that “legislation which entails an interference with fundamental rights must lay down clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards, so that the persons whose exercise of those rights is limited have sufficient guarantees to protect them effectively against the risk of abuse”.Footnote 62 Even more relevantly, the Court holds that the necessity of (such) safeguards is “all the greater where the interference stems from an automated process”, which had previously been established in cases related to data protection law that involved automation.Footnote 63 In other words, the mere fact that content moderation is automated by algorithmic means translates to higher safeguarding requirements. By transferring this argumentation to the context of (copyright) content moderation (and freedom of expression and information, Art. 11 of the Charter), the Court not only lifts this to a more horizontal standard but arguably, if only incidentally, also appears to consider decision quality a crucial matter from a regulatory perspective.

In a somewhat enigmatic fashion and on the basis of previous case-law, the CJEU interprets the prohibition on general monitoring in Art. 15(1) of the e-Commerce Directive,Footnote 64 and the “similar” clarification in Art. 17(8) CDSM Directive that there must not be a general monitoring obligation, as providing “an additional safeguard for ensuring that the right to freedom of expression and information of users of online content-sharing services is observed”.Footnote 65 The Court goes on to interpret this as meaning that OCSSPs therefore “cannot be required to prevent the uploading and making available to the public of content which, in order to be found unlawful, would require an independent assessment of the content by them in the light of the information provided by the rightholders and of any exceptions and limitations to copyright”.Footnote 66 In other words, as in previous cases involving content relating to copyright and hate speech,Footnote 67 the Court deems that, in the context of Art. 17 CDSM Directive too, there is room for automated content moderation only where no detailed legal examination is necessary.Footnote 68 The Art. 17 mechanism “cannot […] lead to […] taking measures which would affect the essence of that fundamental right of users who share content on their platforms which does not infringe copyright and related rights”.Footnote 69 However, this standard seems difficult to reconcile with Art. 17(7) CDSM Directive and its prescription of a specific result, a tension the Court appears to sidestep in its decision: the assessment of copyright limitations and exceptions such as quotation or parody may by default require an independent assessment of the content. And, as discussed above, this legal assessment may not be a trivial one. In other words, the Court avoids taking a stance on the grey zone of erroneous decisions by implying restricted room for automated content moderation. However, as the CJEU also places emphasis on the procedural safeguards contained in, inter alia, the assessment of proportionality in Art. 17(9) CDSM Directive, it appears to acknowledge the need for ex post redress, which means that the issue is not as clear-cut as it may seem.

The matter is complicated by the fact that the general monitoring prohibition referred to above relates only to the imposition of a duty. In other words, OCSSPs are free – and possibly even encouraged by Art. 7 DSA – to voluntarily, “in good faith and in a diligent manner, […] take other measures aimed at detecting, identifying and removing, or disabling access to, illegal content”.Footnote 70 Would the mandatory exceptions and limitations of the Art. 17 regime apply then? Would they inform the OCSSP’s assessment of terms and conditions under Art. 14(4) DSA, as discussed above?

5 Does the Error Rate Say Anything About Decision Quality?

In any case, if the error rates in all the examples above are to provide meaningful information on decision quality, they must be subjected to a concrete contextual analysis.

First, it is necessary to have information available on the basis on which error rates are calculated, i.e. how many errors are present. As noted above, the number of complaints received through an internal complaint-handling system (cf. Art. 15(1)(d) DSA) can provide only one starting point, since it is unlikely that all erroneous content moderation decisions lead to a complaint. There may be instances where users decide not to appeal a decision.Footnote 71 The question then is what other aspects could be taken into consideration for identifying errors. One possibility could be to work with an estimate of what percentage of (wrong) decisions will not be overturned, and to perform random (statistically significant) sampling in which the legality of content is assessed.
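A purely illustrative sketch of such a sampling approach follows; the figures and the simple normal-approximation confidence interval are assumptions for demonstration only, not a prescribed methodology.

```python
import math

# Suppose a random sample of automated removal decisions is re-assessed by
# domain experts to determine how many were wrong.
sample_size  = 2_000   # randomly drawn removal decisions
errors_found = 36      # decisions the expert review classifies as erroneous

p_hat = errors_found / sample_size  # estimated error rate in the sample
# 95% confidence interval via the normal approximation (illustrative only;
# exact or Wilson intervals would be preferable for small error counts).
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"Estimated error rate: {p_hat:.2%} ± {margin:.2%}")
```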

Second, whereas the mere percentage of errors in content moderation might provide a first insight into how precise moderation activities are, it is on its own a superficial metric. In large-scale content moderation, a low percentage of errors (a low error rate) can still amount to a high absolute number of “wrong” content moderation decisions. Consider the following: if bots were to post millions of manifestly infringing works on a platform, which were then (correctly) automatically removed, the overall error rate would be lower than on a platform where fewer manifestly infringing works were posted, even if both platforms wrongly removed the same number of transformative uploads by users. A second factor should therefore relate to the actual volume of content moderation decisions taken.
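This dilution effect can be illustrated with entirely hypothetical figures: two platforms that wrongly remove the same number of lawful uploads can nonetheless report very different error rates, depending only on the volume of easy, manifestly infringing cases they also process.

```python
# Two hypothetical platforms, each wrongly removing the same number of
# lawful (e.g. transformative) uploads.
wrongly_removed = 1_000

platform_a_decisions = 1_000_000   # flooded with manifestly infringing bot uploads
platform_b_decisions = 10_000      # far fewer easy, manifestly infringing cases

rate_a = wrongly_removed / platform_a_decisions
rate_b = wrongly_removed / platform_b_decisions

print(f"Platform A error rate: {rate_a:.2%}")  # 0.10% – looks excellent
print(f"Platform B error rate: {rate_b:.2%}")  # 10.00% – looks poor
# Yet both platforms wrongly removed exactly 1,000 lawful uploads.
```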

It is also questionable whether the acceptable error rate is one-size-fits-all or whether there are varying acceptable error rates. A third factor should therefore relate to the “harm” caused by the error, i.e. the wrong content moderation decision, and whether and to what extent such harm can be mitigated ex post. In the case of hate speech or child sexual abuse material, for example, it might be societally more acceptable to have a higher false-positive rate (over-removal), simply because of the seriousness of the potential infringement and the difficulty of mitigating harm. The negative effect of a delayed posting due to pre-flagging might be relatively small. On the other hand, where economic rights are concerned (including copyright-protected content), consideration should be given to the fact that economic damage can be remedied.Footnote 72 Thus, it could be assumed that the acceptable error rate is lower where fundamental rights, rather than remediable economic harm, are at stake.

6 Conclusions

The present exercise in oversimplification underlines the importance of analysing and differentiating the various strategies for mitigating issues relating to automated (copyright) content moderation. The simplified analysis introduced above allows us to compartmentalise the specific issues of copyright content moderation by online platforms: following this approach, the focus is consequently on the “downstream” issue of mitigating (type-I and type-II) errors in content moderation. In this downstream mitigation, both ex ante obligations and ex post procedural redress mechanisms are relevant. As mentioned above, this paper does not address the “upstream” question of the “right” balance,Footnote 73 e.g. by introducing or interpreting a broad liability exemption and thereby minimising platforms’ incentives to over-enforce.Footnote 74 Such an upstream approach would increase decision quality because of decreased legal complexity and, concomitantly, fewer legal grey zones.

In other words, this approach simply took the existing framework as its starting point: the more perfect the automatic enforcement, the better. But it is (and will remain) unrealistic to achieve error-free content moderation, even if more reliable technology becomes available. One only has to consider the complexity of copyright limitations and exceptions and the rich body of case-law at national and EU level. However, what does become clear is that ex post mechanisms, such as redress and transparency mechanisms, cannot have a direct effect on errors in the initial content moderation decision. Concomitantly, not all mechanisms are equally fit for minimising error (and improving the quality of decision-making).

A further simplification of this perspective lies in the fact that content moderation consists of activities that go beyond takedown, namely measures that “affect the availability, visibility, and accessibility of that illegal content or that information, such as demotion, demonetisation, disabling of access to, or removal thereof, or that affect the ability of the recipients of the service to provide that information, such as the termination or suspension of a recipient’s account”.Footnote 75 However, in the context of copyright infringement, takedown may be the only response appropriate to the exclusive rights of rightholders. To minimise error, though, a mix of moderation techniques might be able to strike a more appropriate balance.Footnote 76 If the legal status of an uploaded work is uncertain after automated assessment, its visibility could, for example, be lowered until a final (expert) decision is taken; thus, the legal risk is better distributed between the parties. Furthermore, the choice is not necessarily between full automation and manual moderation. The importance of human involvement in the automated content moderation process (and not merely ex post) is exemplified by the case of YouTube, which reduced its human review workforce in response to the COVID-19 pandemic. According to YouTube, this reduction in human reviewers has meant that it removes “more content that may not be violative of our policies”.Footnote 77

The CJEU clearly notes that the need for safeguards is all the greater where automated content moderation processes are at play.Footnote 78 Arguably, the quality of decision-making should be the focus when regulating how rights in copyright-protected material are enforced. With regard to copyright content, it has been argued that Art. 17 CDSM Directive has tipped the balance in favour of rightholders and that OCSSPs may be incentivised to over-enforce.Footnote 79 Also, outside the copyright-specific regime of Art. 17 CDSM Directive, the provisions of the Digital Services Act further incentivise online platforms (and in fact all intermediary service providersFootnote 80) to conduct voluntary own-initiative investigations, including to “take other measures aimed at detecting, identifying and removing, or disabling access to, illegal content” (Art. 7 DSA).Footnote 81 Ex post mitigations (redress mechanisms) and transparency do not reduce errors or improve decision quality on their own, but merely mitigate the effects of erroneous decisions, which may vary from case to case. It seems that “users” bear the larger risk of low decision quality. In conclusion, therefore, there needs to be an increased focus on decision quality as distinct from, and in addition to, any ex post mitigation mechanism.