In recent years, questions of internet governance and digital platform regulation have moved from being a specialised niche within internet and digital media studies to the forefront of scholarly, policy and community debate. The triggers for this have been many and varied. The 2013 Edward Snowden revelations about the extent of U.S. National Security Agency (NSA) monitoring of not only U.S. citizens, but platform users and political figures around the world, together with the close connections revealed between the NSA and many of the world’s leading digital technology companies, made it clear that there was no structural separation in practice between the global internet, private capital and the surveillance agencies of democratic nation-states. This influenced Europe’s eventual adoption of its General Data Protection Regulation (GDPR) (Rossi, 2018). From this time, national governments gave increasing attention to setting their own rules about data sovereignty and privacy rights, as Brazil did in passing its Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet) in 2014, and as Canada did from 2015 with data localisation laws. This period also saw the birth of the global Indigenous data sovereignty movement (Kukutai & Taylor, 2016) and the European Union taking an increasingly activist role in internet regulation within its jurisdiction, first with the GDPR’s implementation in 2018, and then with the Digital Services Act and Digital Markets Act, tabled in 2020 and designed to set stricter rules and accountability frameworks for what the European Commission termed Very Large Online Platforms (VLOPs).

Evidence of platform companies’ significant market power and their negative impacts on competition has spawned long-running anti-trust cases, such as the EU’s much-debated action against Google’s shopping search discrimination (Eben, 2018; Schechner, 2021), and has led to calls for tech company breakups (Gilbert, 2021). There are certainly industry commentators who have questioned attempts to define platforms for competition purposes (O’Connor & Schruers, 2016). However, there is now academic consensus that they constitute dominant software systems with global reach that support multi-sided markets, linking geographically dispersed parties in trade, communication, and social and cultural activity, and that there is a great need to better understand these organisational forms and their impacts in order to ensure they operate in the public interest (Gawer, 2021; Gorwa, 2019; Flew et al., 2019).

Notably, the tangible harms arising from social media misinformation, hate speech, terrorism, abuse and harassment have spurred both the introduction of punitive national laws, such as Germany’s Network Enforcement Act (2017) and Australia’s Sharing of Abhorrent Violent Material Act (2019), and major public inquiries, such as the U.S. Select Committee on Intelligence hearings about Russian influence on the 2016 election and the UK’s Online Harms (2019) and Online Safety (2021) inquiries. While the growth of national initiatives to address social media harms has fuelled concerns about the emergence of a legal ‘splinternet’ and an increased regulatory burden on platform operations, it is now apparent that governments see the future of safe, accountable, equitable internet communications and trade as reliant on new controls on platform power and influence. The November 2021 ‘Facebook papers’ revelations that Facebook, Instagram and WhatsApp operations prioritised profit before public safety, amplifying instead of removing harmful content, often against employee advice, have intensified calls for greater regulatory oversight (Satariano, 2021).

Speaking at the Opening Plenary of the 2018 Internet Governance Forum, French President Emmanuel Macron flagged the need for a ‘third way’ in internet governance, between the perceived libertarianism of Silicon Valley and the authoritarian statism of the Chinese internet, arguing that platform regulation to restore accountability and trust was a pre-condition for maintaining the values of freedom and democracy associated with the early vision of the open internet (Macron, 2018). One approach to this can be seen in the European Commission’s voluntary 2016 Code of Conduct on Countering Illegal Hate Speech Online, now in its fifth year (European Commission, 2021), which was introduced to stem a tide of abuse against immigrants during the 2015–2016 mass migration of refugees to Europe. This content moderation monitoring exercise has seen all the major platforms cooperating with NGOs across Europe to act on their reports of hate speech, with the EC evaluating how well platforms are meeting their Code commitments.

The inspiration for this collection of essays came from such apparent paradigm shifts in understandings of the internet and its socio-economic and political role around the world. As the editors of this book, we are the beneficiaries of a research grant awarded by the Australian Research Council (ARC) through its Discovery Program for the project Platform Governance: Rethinking Internet Regulation as Media Policy (DP190100222), along with Tim Dwyer and Chunmeizi Su (University of Sydney), Nicolas Suzor (Queensland University of Technology), Josef Trappel (University of Salzburg), and Philip Napoli (Duke University). We set out to explore the shifting balance between media policy and platform self-governance in the way digital platforms managed their commitments to both free speech and public wellbeing, and the issues arising from nation-state governance of platform content, including the prospects for developing international laws, norms and standards through multi-stakeholder approaches. We also hoped to develop an interdisciplinary understanding of how national media laws, systems and industry cultures continue to shape the practices and conduct of global platform companies operating in multiple jurisdictions.

During our research we were struck by the extent to which politicians, governments and regulators around the world had become increasingly activist towards the largest digital companies in particular, as part of what was described as the ‘techlash’ (The Economist, 2018) and the ‘neo-Brandeisian’ movement to revise antitrust laws to take on ‘Big Tech’ (sometimes also called ‘hipster antitrust’) (Khan, 2018; Rogoff, 2018; Wu, 2018). It was also apparent that industry self-regulation, or ‘regulation by public apology’ (Hall, 2020; Tufekci, 2018), was seriously inadequate in the face of public shocks such as the Cambridge Analytica scandal that The Observer and The Guardian broke in 2018, and the livestreaming of the Christchurch Mosque shootings in 2019. Even Facebook (now Meta) CEO Mark Zuckerberg came to concede the need for regulation of businesses such as his own, acknowledging to the U.S. Congress in 2018 that ‘the real question, as the Internet becomes more important in people’s lives, is what is the right regulation, not whether there should be or not’ (Zuckerberg & Senate Commerce, Science and Transportation Committee, 2018).

Yet this apparent regulatory embrace belies the effort platform companies have put into fighting attempts to regulate them. As far back as 2012, they successfully encouraged their global users to protest against the proposed U.S. Stop Online Piracy Act (SOPA) and Protect IP Act (PIPA) (Benkler et al., 2015). They have marshalled support from free speech NGOs like the Electronic Frontier Foundation and Freedom House against proposals to force them to take more responsibility for what their users post and share. Both Google and Facebook have withdrawn services in response to new national legislation. Google withdrew its news services in Spain and Germany, following attempts by news publishers to negotiate payment for its display of their headlines and excerpts. Facebook withdrew services in Australia following the government’s 2021 introduction of a News Media Bargaining Code (NMBC), banning Australians from accessing news via its platform, and the world from accessing Australian news accounts. Further, the structural complexity of platform ecosystems, and their interdependence with economic, political and social systems, has made traditional approaches to media and information regulation relatively ineffective, if not obsolete, and new governance strategies essential (Van Dijck, 2021).

There is thus a ‘new regulatory field’ (Schlesinger, 2020, p. 1558) which has emerged around digital platform companies’ colonisation of the internet, our multi-faceted adoption of social media tools, and platforms’ relentless ‘datafication’ of personal information (Mejias & Couldry, 2019). Dwayne Winseck has observed ‘a dizzying number of public policy inquiries into the digital platforms’ (Winseck, this volume), seeking to understand the scope of their influence, and their potential for harm. New laws, policies and regulations are being proposed by nation-states, in the liberal democracies as much as in less democratic states, that overlay an already complex web of standards, protocols, rules, regulations and governance structures which have been associated with the global internet since the 1990s (Mueller, 2017; Musiani et al., 2016). Even the United States, long the key advocate of the open, unregulated internet, has experienced an apparent paradigm shift, with the Biden administration giving key policy roles to critics of digital platform power such as Lina Khan and Tim Wu (Flew & Gillett, 2021), while erstwhile ‘free market’ advocates such as the Stigler Center at the University of Chicago call for stronger anti-monopoly laws in order to revive innovation in the digital economy (Stigler Center for the Study of the Economy and the State, 2019). Indeed, the lines between democratic and authoritarian states are blurring in this space, with countries such as China adopting antitrust laws inspired by U.S. policy debates in order to rein in the perceived market power of their own dominant platforms (Kasperkevic, 2021).

In this book, we focus on digital communication platforms. This is a difficult, yet necessary, distinction to make in discussion of platform regulation, given that the platformisation of the internet (Helmond, 2015; Flew, 2019) has occurred in the wider context of the platformisation of business and trade more generally (McAfee & Brynjolfsson, 2017; Parker et al., 2016). There is a plethora of platform companies that are not in communications or media-related businesses, such as Uber, Airbnb, eBay and Upwork. However, big tech companies such as Google, Facebook and Microsoft are clearly in the business of communication, especially advertising, and interact with media companies in a sustained way, although this orientation may be less apparent for companies such as Apple and Amazon. The technological lines between platforms and mainstream media have also increasingly dissolved. Netflix has revolutionised television through the platformisation of content delivery: its analysis of connections between user preferences and behaviour, and its use of data and algorithmic selection based on behavioural targeting, drive its content commissioning decisions, yet it operates in ways that are recognisably those of a media company (Lotz, 2021). Meanwhile, other media companies increasingly look like Netflix, with their on-demand and streaming video platforms (e.g. Disney+, BBC iPlayer) and data-driven decision-making processes.

While platform companies have long maintained that they are content hosts not publishers, and thus are not media companies in the traditional editorial sense, these lines have also been crossed. The Australian Competition and Consumer Commission (ACCC), in its 2019 Digital Platforms Inquiry Final Report, observed that companies such as Google and Facebook increasingly perform ‘media-like functions’ of commissioning, editing, curating and distributing media content, thus giving them a key role in ‘shaping the online news choices of Australian consumers’ (Australian Competition and Consumer Commission, 2019, p. 173).

In this book, we are interested in the forms of regulation and governance that might be applied to those digital platforms which offer communications and media as a service. This includes not only broad-reach, multifaceted platforms such as Google and Facebook, but also more narrowly marketed, sector-specific platforms such as Netflix. It includes companies such as Apple, Amazon and Microsoft insofar as they provide communication services and media publishing, such as iCloud, Twitch, LinkedIn and Yammer, or provide media content: think of Apple TV+ and the App Store, Amazon Video and Audible, or Microsoft’s Xbox. The argument that such platforms are media and communications companies (Napoli & Caplan, 2017) is not without its critics: there is debate about this question in our collection (see Pickard and Winseck’s chapters), as well as between authors such as Philip Napoli (2019) and Dwayne Winseck (2020).

Section 230 of the Communications Decency Act, passed in 1996 under the Clinton Administration and one of the very few provisions of that legislation to survive Supreme Court challenge, is commonly held to be the cornerstone of the platform/publisher distinction, establishing that ‘internet intermediaries’ (as they were then known) may act to block, remove or downgrade content on their sites without acquiring the legal status of publishers, and cannot be held legally accountable for the content posted by their users (Gillespie, 2017). The argument does, however, go further back in the internet imaginary, with Ithiel de Sola Pool’s foundational 1983 text, Technologies of Freedom, first crystallising the argument that new forms of electronic communication technologies required a ‘policy of freedom’ (de Sola Pool, 1983) that clearly demarcated them from the regulation-bound print and broadcasting industries. He envisaged networked electronic media that were inherently, in form and operation, resistant to legal constraint:

Electronic media…allow for more knowledge, easier access and freer speech than were ever enjoyed before…one might anticipate these technologies of freedom will overwhelm all attempts to control them. (de Sola Pool, 1983, p. 251)

Yet early ideas of the internet as a frontier territory, unconstrained and uncontrollable by the rule of law, were never accurate, given the numerous international governing bodies and intertwined governance processes that have been necessary to build and maintain the network of networks, which, despite John Perry Barlow’s (1996) proclamation to the contrary, was never free of tyranny. Waves of regulatory concern about copyright, classification, child pornography, net neutrality and terrorism have generated continual debates about how we might best preserve the liberalising and innovative power of the internet, while acknowledging states’ territorial sovereignty and enabling effective international regulatory efforts (Savin, 2017).

One of the conceptual challenges we have faced in defining activity in this field is whether we are talking about digital platform regulation or digital platform governance. The concept of regulation typically refers to actions by governments and public agencies on private actors that are enabled by binding laws and which carry negative sanctions for non-compliance. Koop and Lodge define regulation as ‘intentional intervention in the activities of a target population, where the intervention is typically direct – involving binding standard-setting, monitoring, and sanctioning – and exercised by public-sector actors on the economic activities of private-sector actors’ (Koop & Lodge, 2017, p. 105). By contrast, governance – derived from the Latin verb gubernare, meaning ‘to steer the ship’ – is associated with a more decentred conception of where power and control lie, encompassing both the agencies and the activities which shape the conduct of actors such as private companies, including those companies’ own attempts at self-rule. Mark Bevir has defined governance in these terms:

Governance draws attention to the complex processes and interactions that constitute patterns of rule. It replaces a focus on the formal institutions of states and government with recognition of the diverse activities that often blur the boundaries of states and society. Governance … highlights phenomena that are hybrid and multijurisdictional with plural stakeholders who come together in networks. (Bevir, 2011, p. 2)

There is a certain natural affinity between the internet and digital platforms on the one hand, and governance practices based on rough consensus rather than formal rules on the other. At a conceptual level, the proposition that decision-making power flows through multiple decentralised networks, nodes and machines sits squarely with understandings of internet culture as informed by actor-network theories (Latour, 2007), with its reliance on forms of collective coordination (Puppis, 2010), and with notions of the internet driving a shift towards network organisations (Thompson, 2003), network economies (Benkler, 2006, 2011) and network societies (Castells, 1996, 2009, 2010, 2012). The internet’s international institutions have never been understood as top-down entities able to impose rules on, and enforce sanctions against, nation-states. Rather, agencies such as ICANN and the Internet Governance Forum are seen as exemplifying principles of multistakeholder cooperation. The institutions involved in global internet governance are framed around tripartite representation, bringing representatives of civil society organisations (NGOs, academics, etc.) and industry bodies to the table, either alongside governments or as an alternative to them (Bray & Cerf, 2020; DeNardis, 2014; Mueller, 2010). Tripartism and multistakeholder approaches have often been the preferred frameworks for addressing issues with digital platform companies, such as guiding principles for content regulation, as they avoid the perceived risks of censorship associated with direct state involvement in decision-making in such domains.

At a more general level, governance relations are at the core of platform businesses. Since platforms operate by definition in multi-sided markets, and since the guiding principle of their business model is to enable ‘core interactions between platform participants, including consumers, producers, and third-party actors’ (Constantinides et al., 2018, p. 381), these companies have to establish ad hoc governance arrangements in order to keep all participants and stakeholders engaged and satisfied with their performance and value-adding capacities. As Flew has observed elsewhere, ‘a platform without governance is not possible; governance is as central to platforms as are data, algorithms, and interfaces’ (Flew, 2021, p. 135).

The breadth of the governance concept is, however, both its strength and its weakness. It undoubtedly captures forms of practice which aim to shape the conduct of others without direct regulation. One thinks, for instance, of the many behavioural ‘nudges’ that are now central to contemporary public policy, where preferred outcomes are achieved by reshaping the ‘choice architecture’ of individuals rather than telling them what they must or must not do (Halpern, 2015; Thaler, 2015). At the same time, governance-based approaches to reshaping the conduct of digital platforms invariably rely upon corporate self-regulation, and raise the question of whether this internal oversight is sufficient to address issues of public concern, or whether it is time for governments to develop stronger rules with meaningful sanctions for non-compliance.

As with the debate about whether communications platforms are media companies, there is a lively debate in this collection about the pros and cons of platform self-regulation. Victor Pickard (this volume) argues that reliance upon corporate self-regulation and social responsibility is always going to be insufficient in the face of business models which promote monopolistic and ethically dubious practices, and that more radical structural reforms are required. Closely interrogating the concept of corporate social responsibility (CSR), Lelia Green and Viet Tho Le argue that it can only play a meaningful role if accompanied by state regulation. By contrast, Nicolas Suzor and Rosalie Gillett (this volume) argue that the content moderation decisions of digital platforms will always require a degree of discretion, and that platform self-regulation is always going to be part of the regulatory mix, even if there are also moves towards more direct government regulation. This is because ‘content moderation and curation is the commodity that platforms offer to their users’ (Suzor and Gillett, p. 274), and the different approaches that platforms adopt in shaping these governance arrangements are inevitably part of their business model and of the contract they offer to their users and multiple stakeholders.

Platform companies’ increasingly tight grip on digital advertising spend, and the resulting dire consequences for both democratic communications and a news media industry already wounded by plummeting circulation and increased competition, have motivated intense regulatory debate. As the UK’s Cairncross Review argued, the platformisation of news has fostered market failures, with declines in local reporting, political coverage and expensive investigative journalism. Cairncross also found that the “unbundled” experience of platform news encounters was having negative impacts on the “visibility of public-interest news and for trust in news” (2019, p. 6). With this in mind, we open our collection with reflections on the types of regulation that might counter the incursions of search and social media platforms on the advertising revenue that once supported journalism.

In the first of our contributions to this collection, North American media studies researcher Victor Pickard explores systemic approaches to supporting public interest journalism in the platform era, ranging from platform company levies to publicly funded media alternatives. He suggests that platforms’ profit-first focus, their adherence to an apolitical “marketplace of ideas” conception of free speech, and their embedding in a North American discourse of negative freedoms (i.e. freedom from regulation) mean they are unlikely of their own accord to address the structural inequities in voice and influence they entrench. Yet even as Pickard characterises platform companies as “vertically integrated monstrosities, wielding a degree of political power incompatible with a functioning democracy”, he also rejects a resort to anti-monopolist, corporate breakup scenarios. Instead he favours solutions that not only curb platforms’ power to determine news agendas, but also ameliorate the commercial drift of digital publishers towards sensationalist, click-driven, socially irresponsible reporting. Amongst those he canvasses are what in the neo-liberal moment might be seen as ‘radical’ alternatives: action from unionised platform workers, legislation that treats platforms as public utilities, and the potential creation of public social media.

U.S. media regulation scholars Philip Napoli and Asa Royal then take up the fraught relationship between platforms and news publishers from a different perspective: that of the press’s legal and political battles to wrest compensation from platform companies for the snippets of news content they display and their users re-distribute. In this account, which reviews long-running copyright cases in France and Germany, the EU’s Directive on Copyright in the Digital Single Market has opened the door to at least one content licensing deal, but with terms that are opaque and which do not acknowledge publishers’ rights in their excerpts. In its coverage of the Australian case, based on competition law, the chapter notes how government attempts to mandate platform-publisher negotiations over the value of news led to Facebook’s infamous news ban, demonstrating both its market power and its disregard for civil society. Here too, as in France, we see that deals with Google and Facebook under the NMBC lack transparency and benefit larger companies or those that bargain collectively. While concluding that government intervention seems essential to secure the future of news journalism, Napoli and Royal’s chapter also suggests the difficulty of approaching platform regulation from isolated, issue-based perspectives. In this respect, an integrated approach to media reform of the type proposed by the ACCC’s Digital Platforms Inquiry can likely return better outcomes than individual legislative changes or dependence on platform self-regulation and industry support.

The need for a coherent program of reforms to meet the challenges of digitalisation and platformisation is part of the narrative legal scholar Amélie P. Heldt presents in reviewing platform obligations under the EU’s proposed Digital Services Act (DSA), and how these are monitored for compliance. Heldt notes that the driving force for the Act was member states’ individual moves to legislate against online harms, a patchwork of legislation that suggested the EU needed a more uniform approach to intermediary liability, user safety and rules for the removal of illegal content. Under the DSA, platforms are also obliged to provide feedback to the source of removed illegal content about the rationale for its erasure, and to maintain an internal complaints-handling process for users more broadly, moves the platforms have resisted due to the administrative burden of compliance. However, this aspect of the DSA does not address a key finding of the EC’s fifth hate speech monitoring trial, which found platforms also need to improve their feedback to users who notified them of illegal content, detailing actions taken (Reynders, 2020), a move which would encourage more effective flagging. Where the DSA does innovate, according to Heldt, is in the establishment of two new regulatory agents, national Digital Services Coordinators and a regional Board for Digital Services, which will work in tandem with the European Commission, and in mandating that platform companies abide by the EU Charter of Fundamental Rights in their dealings with their services’ users and competitors.

The question of platform companies’ ‘social license to operate’, and their responsibilities to the societies and communities they serve, has been brought sharply into relief by Facebook’s role in the genocide of the Rohingya minority (Lee, 2019), and more recently by social media’s contribution to the 2021 storming of the U.S. Capitol (Schewe, 2021). In their chapter, communications scholars Lelia Green and Viet Tho Le use former President Donald Trump’s deplatforming after the Washington D.C. Capitol riot on January 6, 2021 as a jumping off point to reflect on the types of social responsibility we might expect from platform companies, as well as the regulatory measures and civic action that might encourage them to better address social concerns and democratic principles. Certainly, we have seen increasing platform attention to responsibility in advertising since the establishment of GARM, the Global Alliance for Responsible Media, a World Federation of Advertisers initiative to explore the mitigation of “harmful content on digital media platforms and its monetization via advertising” (GARM, 2021), and since the international #StopHateForProfit campaign of 2020 mobilised Coca-Cola and Unilever to support its protest. However, the debate Green and Le engage about what constitutes good corporate citizenship in content publishing and moderation underscores the extent to which platforms are making, via AI filtering and/or rapid human assessment, ever more significant editorial decisions once taken by licensed media companies and monitored by national agencies. As Van Dijck et al. (2021) argue, the platforms’ move to deplatformisation, the wholesale preventative removal of dangerous individuals and their organisational networks, “exposes an accountability gap” between platforms, governments and the public. It is precisely this type of power, they argue, which controls access to the essential infrastructure of global communicative participation, that demands more transparent regulatory intervention at national and supranational levels.

In this respect, digital media researchers Nicolas Carah and Sven Brodmerkel present a persuasive case that we also need to know far more about the forms, impacts and consequential harms of platformised advertising, and the influence it imparts to platform companies, given that Google and Facebook account for 28.6 and 23.7 per cent respectively of global digital advertising spend in 2021 (eMarketer, 2021). Using the case of online alcohol marketing, and its new participatory, data-fuelled platform model, their chapter contributes significantly to our knowledge of how platform companies have transformed advertising and ad markets through data analytics and interface design. Algorithmic brand cultures not only micro-target advertising to user preferences and behaviour, but also encourage vernacular creativity from influencers and users to boost campaign impact. While historically advertising regulation has been concerned with the representation of drinking cultures, Carah and Brodmerkel argue we should now be more concerned about the opacity of advertising’s reach and influence: the difficulty of understanding who has been targeted, with what, and with what consequences for public health and other socially beneficial outcomes.

Throughout this collection, the contributing authors provide a lively critique of platform companies’ resistance to administrative transparency, and their reluctance to reveal exactly how they intervene in public debates, or what they do to mediate dangerous and risky content. Communications scholar Pawel Popiel provides a new lens on transparency, tracking how the major U.S. platform companies try to influence policy debates: the issues they tackle, the policy approaches they favour, and what their policy communications suggest about their attitudes to regulation and governance. His analysis confirms their interest in technological solutionism and what he calls “frictionless regulation”: the self-defined, rapidly evolving territory of platforms’ community standards and issue-based (often seemingly ad hoc) multi-stakeholder engagement. This focus, he argues, advances their business interests while avoiding structural interventions into their operations or entanglement in lengthy public deliberations. As the ACCC’s Rod Sims told European policy-makers recently: “what we’ve observed…is that Facebook and Google, they really just do things on ‘take it or leave it’ terms. They dictate the terms of the arrangement” (Sims, 2021). So while platform companies may accede to national co-regulation in certain areas such as data privacy, Popiel warns that they will move fast to set the policy agenda, with the worrying possibility of state capture by private interests.

A micro-analysis of Facebook and Google’s policy agency, by James Meese and Edward Hurcombe, then reveals how this dynamic played out during the formulation and introduction of the ACCC’s News Media and Digital Platforms Mandatory Bargaining Code. Here, we see a regulatory action that sought to make big platforms pay for news, but which avoided designating either company as actionable under the new law because both negotiated deals with publishers before that happened. Meese and Hurcombe undertake a close reading of the policy process to challenge the common view that the Code benefited big media rather than journalism or media diversity (see Warren, 2021). Their decentred analysis of institutional alignments in industry and political agendas reveals how ongoing stakeholder negotiations led to new regulatory obligations on Google and Facebook, despite the companies’ apparent economic power. Their account highlights the need for situationally and historically nuanced accounts of policy development that consider path dependencies as factors in regulatory outcomes.

Chunmeizi Su then takes up this challenge, exploring how Australia might differently consider regulating the activities of North America’s tech giants and their Chinese counterparts Baidu, Alibaba and Tencent, in light of the latter group’s growing base of Chinese-Australian users. She notes that while both Facebook and WeChat have launched initiatives to combat misinformation, WeChat is less likely to trigger direct government responses in Australia as it is principally a platform for the Chinese diaspora, whereas Facebook is closer to being a ‘mass’ communication medium.

The collection’s focus then turns to the fate of local cultural production markets in an era of platformisation, another critical concern for policy-makers given the rise of global subscription video-on-demand (SVOD) streaming services like Netflix and Disney+. Stuart Cunningham and Oliver Eklund highlight the competitive and information asymmetries between the highly regulated, territorially bound broadcast sector and the relatively unregulated, unbound digital video “curation, aggregation and sharing” sector, asymmetries which have enabled SVOD companies to act as market disruptors in the screen industries, and draw parallels between the regulatory challenges raised by the market dominance of search and social media platforms and those of streaming platforms. Using three case studies, Cunningham and Eklund trace how European, Canadian and Australian regulators have sought to implement digital media policy reform that meets competition, social, cultural and public interest information goals, and how each differently addresses the contentious proposition that platforms should contribute financially to local cultural production in return for market access.

Applying a closer lens to the “Netflix Effect”, or the influential market impact of its algorithmic production model, Ramon Lobato and Alexa Scarlata then investigate ‘discoverability’, a key aspect of this model, and its implications for media and information policy. Discoverability, or the set of mechanisms that make content visible to streaming platform users, has become a hot-button policy issue because some sources and types of content formerly privileged in legacy policy (such as local, minority-language and documentary content) risk being marginalised on streaming services. In exploring the breadth of editorial and system design factors that govern how content recommendations are made, Lobato and Scarlata rehearse the distributive politics of visibility and then unpack its realisation in the media policies of Canada, the UK, Australia and the European Union. Importantly, they question the transparency and contestability of the decision-making which shapes the prominence of competing channels and public service media content in streaming delivery.

The preceding two chapters position communications platforms comfortably within the ambit of existing media policy and regulation, which is, more or less, the proposal that has underpinned our research over the past two years. In contrast, telecommunications researcher and political economist Dwayne Winseck argues that trying to shape platform behaviour along broadcasting principles is mere political expedience, and ignores tech companies’ closer historical alignment with the telecommunications, electronics and finance sectors. For Winseck, a pre-history of digital information networks suggests four principles on which we should base any future regulatory moves on platform companies: structural separation of large corporations; line-of-business restrictions; the imposition of public interest obligations; and the provision of public service media and communications alternatives.

Whatever the policy framework we may wish to apply to the conundrums of platform regulation and governance, the question of what role self-regulation and corporate governance should play looms large in a political climate dominated in the West by ‘light touch’ regulatory approaches, neoliberal economics and populist governments. Media law scholars Nicolas Suzor and Rosalie Gillett argue that persuading platform companies to exercise better self-regulation is a critical part of any oversight framework. Drawing on consultations with a variety of regulatory experts, they argue that self-regulation provides faster, more flexible and informal means of enforcing content standards and removing harmful material, although these may suffer from a legitimacy deficit. They also canvass the problems that civil society actors have in influencing platform decision-making, and note the need for more effective platform consultation with government and civil society. Critically, they note the difficulty external parties face in influencing longer-term, significant policy directions.

Despite the clear need for platforms to improve their self-governance, at this moment the politics of self-regulation are somewhat on the nose, especially in the wake of Facebook and Instagram’s struggles with COVID-19 misinformation, and since the release of the Facebook papers with their spectacular exposé of Meta’s internal policy discontents. It seems fitting, then, that our final contribution, from Terry Flew, turns an interdisciplinary lamp on the reasons why debates about tech policy wandered for decades in the discourse of governance, and are now turning regulatory with some fervour. Building on research into electoral swings and the rise of populist governments, Flew argues that technology policy, once the province of cosmopolitan tech-savvy elites, is now yielding to more conservative forces – leaving information activists torn between options that might curb human rights harms, but may equally curtail free speech.

As the European Union, the Brexited UK and Canada look to introduce new platform-oriented policy reforms, this collection provides invaluable insights into the lenses that can be applied to those deliberations. It canvasses the variety of stakeholders that require consideration and the intricacies of their relationships, gaps in regulatory research, and the complexity of the field as it emerges. Thanks to our geographical location, this work certainly foregrounds activity in Australia and its region, but we regard this as an important balance to global north perspectives, and a worthy focus on the shift to national regulatory activism that is informing approaches in Europe and elsewhere.