Introductory Comments

The mitigation of digital disinformation and its incentives has recently become a policy sphere in which national governments, including Australia's, and supranational authorities have understandably become more active. Yet, to date, most solutions to the disinformation problem have been either consumer-led or non-binding. These approaches, which we now canvass, are not incompatible with TIPA laws and should constitute part of the broader policy architecture in the fight against false information. But, as they currently operate, they are not enough. Neither is defamation law adequate to the task, as we will argue.

The ACCC Digital Platforms Inquiry

In Australia, the Australian Competition and Consumer Commission's ('ACCC') 2019 Digital Platforms Inquiry ('DPI') made notable inroads into enhancing our understanding of social media networks and the role they play in digital disinformation. While most of the Inquiry's recommendations pertained to areas such as mergers and acquisitions law, advertising, regulatory harmonisation and ensuring the long-term sustainability of existing media structures, the DPI report's 15th recommendation called for the implementation of an 'industry code of conduct to govern the handling of complaints about disinformation' (ACCC, 2019: 34).

The report suggested that the code be enforced by the Australian Communications and Media Authority ('ACMA') and that ACMA be vested with the ability to gather information from signatories, impose sanctions to deter code breaches, provide public reports on the nature and scope of complaints, and report annually to the government on the code's efficacy and members' compliance. It should be noted here that the ACCC recommended that the code pertain only to false information that causes 'serious public detriment' (ACCC, 2019: 22). While conceding that it is less of a threat in the Australian context, the report also indicated that 'malinformation' (information that is based in reality but whose sole purpose is to inflict harm) should be encompassed by the code.

While the term 'serious public detriment' was used multiple times throughout the DPI report, it was never properly defined and its scope was left indeterminate. The ACCC did, however, explicitly state that it would expect the code to encompass doctored and dubbed misrepresentations of public figures, incorrect information about the time and location of voting in elections, and false allegations of a public individual's engagement in illegal activity.

Examples of situations in which the code was not expected to apply were more revealing in terms of its proposed nature. Specifically, the ACCC did not see it as applying to false or misleading advertising. This exception is understandable in the commercial realm, as such false representations are already governed under Australian Consumer Law. The omission does, however, mean that the code does not apply to false and misleading political advertising. Further, the ACCC did not see the code as applying to commentary and analysis that is 'clearly identified as having a partisan ideological or political slant' (ACCC, 2019: 371), a striking oversight that leaves false partisan claims made in the public realm unregulated. Finally, and perhaps most tellingly, the ACCC did not see the code as applying to 'incorrect or harmful statements made against private individuals' (ACCC, 2019: 371)—citing existing protections under defamation law. As will be explored later in the chapter, the argument that all harmful statements against individuals (even those in public office) should be addressed via defamation law has been, and continues to be, applied in arguments opposing TIPA provisions. Such reasoning was most notably cited in the 1984 report of the Joint Select Committee on Electoral Reform ('JSCER'), which recommended that Australia's short-lived federal TIPA provision be repealed.

The DIGI Code of Conduct

Later in 2019, in response to recommendation 15 of the DPI report, the Federal Government committed to 'ask[ing] the major platforms to develop a code (or codes) of conduct for disinformation and news quality' (Australian Government, 2019: 7) and appointed ACMA to oversee the development of the code. In 2020, Digital Industry Group Incorporated ('DIGI'), the peak association for the Australian digital industry, sought to establish a voluntary code of conduct ('DIGI code') for its members pursuant to the government's request, but notably against the ACCC's recommendation that sanctions should be applied for breaches. The code was developed in consultation with DIGI's members, the University of Technology Sydney's Centre for Media Transition, as well as the media monitoring firm First Draft, and was implemented in February 2021. The code is wholly voluntary and underpinned by seven key objectives and seven desired outcomes (DIGI, 2021).

The seven objectives are:

  1. Provide safeguards against Harms that may arise from disinformation and misinformation;
  2. Disrupt advertising and monetisation incentives for disinformation;
  3. Work to ensure the integrity and security of services and products delivered by digital platforms;
  4. Empower consumers to make better-informed choices of digital content;
  5. Improve public awareness of the source of political advertising carried on digital platforms (emphasis added);
  6. Strengthen public understanding of disinformation and misinformation through support of strategic research; and
  7. Signatories publicise the measures they take to combat disinformation and misinformation (see DIGI, 2021: 10–16).

The seven corresponding desired outcomes are:

  1. a) Signatories contribute to reducing the risk of harms that may arise from the propagation of and potential exposure of users of digital platforms to disinformation and misinformation by adopting a range of scalable measures; b) users will be informed about the types of behaviours and types of content that will be prohibited and/or managed by signatories under this code; c) users can report content or behaviours to signatories that violate their policies … through publicly available and accessible reporting tools; and d) users will be able to access general information about signatories' actions in response to reports made;
  2. Advertising and/or monetisation incentives for disinformation are reduced;
  3. The rise in inauthentic user behaviours that undermine the integrity and security of services and products is reduced;
  4. Users are enabled to make more informed choices about the source of news and factual content accessed via digital platforms and are better equipped to identify misinformation;
  5. Users are better informed about the source of political advertising (emphasis added);
  6. Signatories support the efforts of independent researchers to improve public understanding of disinformation and misinformation; and
  7. The public can access information about the measures signatories have taken to combat disinformation and misinformation (see DIGI, 2021: 10–16).

Each objective is buttressed by a number of sub-objectives that attempt to set out clear and actionable self-regulatory behaviour. Signatories currently include Twitter, Google, Facebook, TikTok, Microsoft, Adobe, Redbubble and Apple—each of whom released their code commitments (objectives to which they have 'opted in') and transparency reports (steps taken to address the relevant objectives) in May 2021.

All seven objectives are of importance in combating the false information epidemic, although objective 5 is especially relevant to combating misleading election advertising since it seeks to ‘[i]mprove public awareness of the source of Political Advertising carried on digital platforms’. The corresponding outcome is that ‘[u]sers are better informed about the source of political advertising’ (DIGI, 2021: 14). The objective contains three actionable items:

  1. While Political Advertising is not Misinformation or Disinformation for the purposes of the Code, Signatories will develop and implement policies that provide users with greater transparency about the source of Political Advertising carried on digital platforms.
  2. Measures developed and implemented in accordance with [objective 5] … may include requirements that advertisers identify and/or verify the source of Political Advertising carried on digital platforms; policies which prohibit advertising that misrepresents, deceives, or conceals material information about the advertiser or the origin of the advertisement; the provision of tools which enable users to understand whether a political ad has been targeted to them; and policies which require that Political Advertisements which appear in a medium containing news or editorial content are presented in such a way as to be readily recognisable as a paid-for communication.
  3. Signatories may also, as a matter of policy, choose not to target advertisements based on the inferred political affiliations of a user (DIGI, 2021: 14).

The first two commitments are mere complements of the existing authorisation requirements for political advertisements. All political advertisements at the federal level are already required to inform the audience of their authoriser per the Commonwealth Electoral (Authorisation of Voter Communication) Determination 2021. This requirement extends to social media, video sharing and digital banner advertisements. The third item, given its unenforceability and its stipulation that signatories 'may' choose not to target advertisements, is problematic. It is not clear why any platform with the ability to target voters of a certain political orientation, particularly when there is advertising revenue to be made, would comply. The rationale here is probably related to the belief that voluntary codes can be important for building trust and mutual respect between governments and platform owners (Pamment, 2020). Guiding principles or industry-led standards often contain aspirational clauses which, while unenforceable, create a framework for a conversation that may lead to the development of certain desirable goals. The objective of such a clause, with its opaque language, is not to secure compliance, but to achieve wider subscription in order to shape the conversation and help the industry move forward together. The effect of a provision such as this is to invite (rather than force) platforms away from doing as they please and towards supporting industry principles and goals for the recognised long-term common good of the industry. Nevertheless, cl 5.23 would be unlikely to have any immediate effect on regulating the truth of political advertisements, and the clause's unenforceability does highlight the need for legislative intervention.1

While, as expected, signatories subscribed to all commitments relevant to their operations, the voluntary nature of the Code presents some potential future challenges. In Section 6.1D, the code explicitly mentions that '[s]ignatories may take into consideration a variety of factors in assessing the appropriateness of measures including … whether the platform may receive a commercial benefit from the propagation of the content' (DIGI, 2021: 16) (emphasis added). In other words, should commitments to preclude false information and its harms affect profits, firms are at liberty to abandon those commitments. This caveat does not appear to be congruent with the code's second objective, that is, to reduce the monetising potential of false information. The two signatories with the most market power and whose activities are most relevant to the propagation of political false information—Google and Facebook—are large publicly traded companies whose first duty, under corporations law, is to their shareholders rather than the public interest; there is therefore little incentive for them to comply, and it might even be argued that it would be irrational for them to do so.

The DIGI code is certainly a step in the right direction in terms of regulating false information, but its tangible effects remain to be seen especially given its non-binding character and internal inconsistencies. In the Government’s response to the Digital Platforms Inquiry, it committed to evaluating the effectiveness of the voluntary code following its implementation, and to consider the need for further action if the voluntary measures are failing to ‘adequately mitigate the problems of disinformation’ (Australian Government, 2019: 13).

Similar codes have been developed in other jurisdictions such as India, Sweden and Canada, although the European Union Code of Practice on Disinformation ('EU Code') is generally considered to be the archetypal self-regulatory industry code. Indeed, when considering how an Australian counterpart code may look, both DIGI and the ACCC consulted the EU Code as best practice in the policy area. The EU Code covers five broad policy areas. These are:

  1. Scrutiny of ad placement
  2. Political advertising and issue-based advertising
  3. Integrity of services
  4. Empowering consumers
  5. Empowering the research community.

The second policy area, as applicable to our focus here, distils into three commitments, namely: compliance with EU and national law regarding the presentation of paid advertisements; enabling public disclosure of political advertising; and devising means to publicly disclose ‘issue-based advertising’.

As with the Australian code, the largest and arguably most culpable platforms and technology companies are signatories to the EU Code as of 2020. The EU Code has been criticised since its 2018 commencement as lacking uniform definitions and procedures for implementation as well as firm-side transparency. The European Commission expressed its dismay at the 'extent to which Facebook, Twitter, and Google failed to report success metrics for their efforts', euphemistically commenting that progress had been hindered by the fact that 'the different parties' had 'interests' that were 'divergent' (Pamment, 2020). Further, information published through the Code's self-reporting requirements cannot be accurately verified (ERGA, 2020). It is, however, important to note that the EU Code constitutes just part of the fledgling European regulatory apparatus to combat false information (European Commission, 2018). By virtue of the Australian code's similarity to the EU Code, it appears to be vulnerable to similar criticism. At the time of writing (December 2021), the EU Code is being strengthened due to suboptimal effectiveness and perceived weaknesses (Pamment, 2020).

While the problem of false information, particularly of a digital nature, is multifaceted and increasingly intractable, the DIGI Code is a welcome development in the sphere. Any attempt to mitigate digital information 'pollution', particularly at election time, is welcome, but only if it is not being used as window-dressing for lack of action. It remains to be seen whether the code's fifth objective (to ensure 'users are better informed about the source of political advertising'), as the pillar most relevant to the regulation of political advertising, will have any demonstrable effect on the character of political debate in Australia. The impending 2022 Australian federal election may prove a worthwhile test of the pillar's effectiveness. The Australian Government's recent conflict with Facebook and Google has shown that resisting the power of platforms has costs, as exemplified by the news 'blackout' following the passing of the 'News Media Bargaining Code' in February 2021 (see Treasury Laws Amendment [News Media and Digital Platforms Mandatory Bargaining Code] Act 2021 [Cth]). On the other hand, the eventual backing down of Facebook and Google in this dispute could strengthen the resolve of authorities.

Alternative Remedies

Although we have discussed the use of voluntary codes and propose a legal remedy below, it is worth mentioning here that there are other, non-regulatory, non-codified means by which we can seek to inoculate people against damaging false campaign statements. For example, we could encourage consumers to stop getting their 'news' exclusively from Facebook, or we could step up campaigns to ensure that independent and reliable public interest news providers (like the ABC and SBS) remain active in the news market and, importantly, are properly resourced. Alternatively, we could adopt the approach used by public health organisations when trying to counter false information about COVID-19 vaccination on social media; such organisations aim not to change the opinions of the people posting it, but to reduce misperceptions among those consuming it. A study published in October 2020 by two American researchers, Emily Vraga and Leticia Bode, tested the effect of posting an infographic correction in response to false information about the science of a false COVID-19 prevention method. They found that a bot developed with the World Health Organization and Facebook was able to reduce misperceptions by posting factual responses to false information when it appeared (Vraga & Bode, 2020). Social media platforms can also address COVID-19 false information by simply removing or labelling posts and de-platforming users who post it, a method that could readily be translated to the electoral context.

Other methods are driven by consumers and civil society; for example, a recent experimental trial launched by Twitter in Australia, the United States and South Korea allows users to flag content they consider misleading in the same way that other harmful content is currently reported; usefully for the electoral context, there is an option to flag whether the post is related to ‘politics’ (Newman & Reynolds, 2021). There are also independent watchdogs like Digital Rights Watch and third-party fact-checking services like the RMIT/ABC Fact Check collaboration which verifies ‘the accuracy of claims by politicians, public figures, advocacy groups and institutions engaged in public debate’ (RMIT/ABC, 2020).

We could also regulate by other means: we could, for example, regulate media markets more robustly to prevent oligopolies and monopolies from excluding or crowding out other market participants. An additional quasi-legal—yet largely overlooked—means of regulating campaign speech is California's Code of Fair Campaign Practices, which operates more as a 'moral obligation' for politicians throughout the campaign than as an enforceable statute (Cal ELEC Code, Division 20, Chapter 5, § 3). All prospective candidates in California are given a copy of the code to sign, although subscription to the code is voluntary. The Code purports to promote 'open', 'sincere' and 'frank' campaigning and to prohibit the use of defamation, libel or slander pertaining to a candidate's personal life.

We could—and perhaps should—use all of the above methods but it is also worth considering a more decisive method that is able to send a clear message about the serious harms of false electoral information: legal regulation. Yet, according to some, the problem is already being legally regulated.

Are Defamation Laws Enough?

Some scholars have argued that the laws of defamation are sufficient to manage false campaign advertising (see JSCER, 1984), but this is controvertible. The effects of false campaign statements extend far beyond the scope of defamation law, which only addresses reputational harm (George, 2017: 93) and excludes relevant matters like broad policy issues about which spurious claims are often made. Further, in Australia, the Commonwealth Electoral Act 1918 (Cth) currently prescribes a period of between 33 and 68 days from the dissolution of the House of Representatives to polling day for federal elections. The campaign period is therefore short, yet cases under the laws of defamation can take months or even years to resolve. Since defamation remedies are largely ex post, they do little to protect the informational integrity of the protracted preference formation stage of an election. Defamation law primarily provides a remedy of damages long after the fact, when what is really needed is the speedy withdrawal and retraction of misleading political advertisements to prevent damage prior to the election. The remedies available in defamation therefore offer unsatisfactory recourse to a disgruntled candidate whose campaign was significantly affected by false statements; the public will undoubtedly be equally dismayed to learn that their irreversible voting choices were based on faulty information. Although it might be said that an interlocutory injunction is available as an alternative to damages, as Hunt J stated in Church of Scientology California Inc v Reader's Digest Services Pty Ltd: 'an injunction will not [be granted] … which will have the effect of restraining discussion in the press … of matters of public interest or concern' ([1980] 1 NSWLR 344, 349). For these reasons, defamation law is inadequate and inappropriate to regulate political advertisements like the Mediscare and Death Taxes campaigns.

As political campaigning in Australia becomes more digitised, contested and decentralised, the threshold for candidates to seek a retraction and injunction for false campaign statements should change accordingly. For reasons explained later in this book, we also believe—consistent with the current South Australian regime—that criminal penalties, rather than the civil remedies available under defamation, should apply to these types of statements. Therefore, because they are unable to deal with (a) the seriousness of the false information problem in (b) a timely manner, the civil laws of defamation, as they currently stand in Australia, are unsuitable for effectively managing modern, and especially digital, political campaigning.

We now consider the degree to which Australia is able to accommodate TIPA-type laws.

Note

  1. Many thanks to Sam Whittaker for his constructive insights on this point.