Introduction

From the Edward Snowden revelations and the Cambridge Analytica scandal to the spread of political disinformation and hate speech amid surging right-wing populism, debates about policy oversight of dominant digital platforms have become commonplace, public, and global. One prominent policy thread focuses on antitrust intervention and economic regulation to curb platforms’ market power. In 2020, in the US alone, three antitrust lawsuits were brought against Google and Facebook, though their outcome is hardly certain given the history of judicial deference to the narrowly focused and permissive Chicago School approach to antitrust (Khan and Vaheesan 2017; Wu 2018). Of the range of proposed policy responses, however, much of the policy discourse has focused on platforms’ content moderation and speech regulation, whose consequences are the most visible aspect of platform operations. For a number of reasons, including that competition law cannot fully address the non-economic problems platforms pose, while traditional content regulation has difficulty grappling with the sheer scale of content on platforms (Gillespie 2018), there are growing calls for public–private governance regimes, like co-regulation and shared governance (Flew 2018; Gorwa 2019a), to address content and other concerns. Drawing on internet governance frameworks, these proposals envision balancing oversight between private platforms and public groups in enforcing standards whose contours need to be defined (Napoli 2015).

These policy debates often lag behind developments in platform markets that impact the issues policymakers try to address. For instance, the launch of the Facebook Oversight Board, a private–public oversight experiment intended to address content moderation controversies, gives Facebook a first-mover advantage over policymakers in defining co-governance frameworks. Even as big tech antitrust cases were filed, dominant platforms made record profits when scores of quarantined people turned to their services amid a global pandemic, revealing platforms’ expansive reach into social infrastructure and across multiple sectors—a process called platformization (van Dijck et al. 2018), which animates their accumulation of platform power. Platformization is driven not just by technological innovation and business strategies, but also by platforms’ ability to influence policy to their advantage. Although this influence has met with pushback at the policy level internationally, it continues to inform policy debates, including through lobbying and policy communications (Popiel 2018; Yates 2021).

While designing policy solutions has received significant scholarly attention, platforms’ policy activities remain understudied. In this chapter I outline key features of platforms’ policy communications and discuss their implications for articulating platform oversight, particularly for defining public–private frameworks and their jurisdictional boundaries. I begin by reviewing governance challenges posed by digital platforms. Then, I outline public–private regulatory arrangements that receive growing attention as governance solutions to the problems associated with digital platforms. Next, I consider platforms’ efforts to influence policy debates about these governance approaches by tracing key features of US tech giants’ policy communications, namely of Amazon, Apple, Facebook, Google, and Microsoft. I focus on: (a) the policy issues they engage; (b) their policy preferences; and (c) their regulatory and governance philosophies. Among other features, these philosophies converge on frictionless regulation: tech-defined, constantly evolving, and narrow regulatory oversight, matching the speed of tech markets at the expense of the deliberative responsiveness to the public, typical of democratic governance. I conclude by reflecting on this paradigm’s implications for policy debates about platform oversight, including co-regulatory frameworks, and the tradeoffs it introduces for policymakers.

Governing Digital Platforms

Digital platforms have become “central gatekeepers of news and information” (Napoli and Caplan 2017), accumulating market power while mediating most of our daily interactions online. For instance, a handful of digital platforms capture a growing portion of global advertising expenditures. As gatekeepers, companies like Google and Facebook represent a major source of news for online users (Iosifidis and Andrews 2019; Park et al. 2020), compounding a growing power imbalance between digital platforms and news publishers (Australian Competition & Consumer Commission 2019). Such features characterize “platform capitalism” (Srnicek 2017) and fuel scholarly and policy attention to developing platform oversight frameworks. However, as Fay (2019) notes, “while platforms are pervasive in everyday life, the governance across the scale of their activities is ad hoc, incomplete and insufficient” (p. 27).

Digital platform markets pose unique challenges for policymakers. First, platforms’ global span makes policy oversight more complex. For instance, digital platforms’ business models and operations introduce problems that are international in scope (e.g., cyberterrorism) and require state coordination to address. Moreover, a state’s policy governing platforms may involve both domestic and global stakeholders, including foreign governments, platform companies, civil society groups, and users. Likewise, platform-related policy debates have international dimensions, including around data regulation and content moderation. For example, Google’s decision to pay selected news publishers in Germany, Australia, and Brazil fuels related policy debates in other countries (e.g., Hunter and Duke 2019). Meanwhile, efforts to assert state governance over digital markets, such as the European Union’s General Data Protection Regulation (GDPR) and the Digital Single Market strategy, involve geo-political calculations, for instance involving trade. These debates and initiatives suggest a combination of policy transfer, or the travel of policy approaches across national jurisdictions (Dolowitz and Marsh 1996), and concurrent policy translation, or the mutation and adaptation of policy frameworks to different state contexts (Stone 2012; Mukhtarov 2014). These sometimes opposing forces can yield limited policy harmonization internationally.

Second, domestically, the processes of media convergence and digitization have blurred traditional jurisdictional boundaries in communications policymaking (Flew 2016; Krämer and Wohlfarth 2018). These regulatory frameworks do not map easily onto platforms, which elude traditional definitions of media and telecommunications. Additionally, simply applying established forms of content regulation governing legacy media to digital platforms poses challenges for regulators (Flew 2018). From a practical standpoint, the sheer scale of content, data, products, and users on platforms makes regulatory oversight and enforcement technically challenging (e.g., Gillespie 2018; Klonick 2017). In the economic domain, platforms’ multi-sided market structure and seemingly “free” services enable them to elude traditional antitrust triggers (Khan 2017; Coyle 2018). Meanwhile, the remaining disciplinary measures have often taken the form of massive fines for privacy violations, which ultimately represent a fraction of these companies’ annual revenues, are written off as a cost of doing business, and fail as a deterrent.

Private–Public Governance

Given the challenges arising from the global scope and scale and the limits of existing regulatory frameworks for state-driven policy oversight, concepts like “soft law,” shared governance, and co-regulation have gained attention as potential solutions (Flew 2018; Gillespie 2018; Gorwa 2019a). Such governance frameworks are posited as a “third way” (Gorwa 2019b, 11) between self-regulation and external oversight regimes like EU’s GDPR, which regulates platforms’ data practices. Drawing on international law, soft law involves “the use of quasi-legal processes, including rules, norms, guidelines, codes of practice, recommendations and codes of conduct, which are typically applied at an industry level, to enforce appropriate corporate behaviour” (Flew 2018, 29). Similarly, co-regulation essentially strikes a balance of power between the public and private sector, allowing regulators to “set the general rules and laws, and industry can oversee the operational dimensions of their application, subject to oversight from the government regulators and the parliament” (Flew 2018, 28). For instance, the 2014 “EU Internet Forum” involved a collaboration between EU governments and prominent digital platforms to develop a code of conduct for addressing online hate speech (Gorwa 2019a). Co-governance denotes more expansive oversight that does not require government participation, as in the case of the Global Network Initiative (GNI)—a set of free expression-related standards and practices co-produced by human rights NGOs and digital platforms (Gorwa 2019b).

The allure of such arrangements is their purported responsiveness to “the rapid pace and development of the platform ecosystem, as well as the dynamic nature of the platform companies” (Gorwa 2019b, 12), and their ability to surmount the challenges associated with the jurisdictional issues platforms pose and the volume and scale of information flowing across their services (Flew 2018). They are designed to be pliable and can be legally binding and enforceable. However, they also require meaningful oversight, a balanced allocation of governance rights between the parties comprising these arrangements, and effective sanction mechanisms. For instance, Facebook’s Oversight Board is a private initiative that self-imposes and circumscribes external oversight on specific content-related matters by Facebook-appointed civil society members and academics (Arun 2020). The initiative blurs the line between co-governance and self-regulation, raising important questions about how these types of arrangements will work in practice.

Yet, such power-sharing arrangements contribute to the growing privatization of internet governance (Freedman 2012; Musiani 2013). As Freedman (2012) argues, these third way frameworks represent “a willingness to outsource a range of responsibilities that were previously carried out by the state but that have now been subcontracted to non-state organisations. … The preferred mechanisms of contemporary governance regimes are increasingly self-regulation” (p. 100). These policy choices impact information flows and private control over them. Undoubtedly, deciding upon and implementing such arrangements are pressing political questions, including for platform companies.

Digital Platforms as Policy Actors

Digital platforms are active policy actors strategically shaping policy debates to advance their business interests. Indeed, private firms with sizable market power have a range of political tools at their disposal, including campaign funding, lobbying, and recruiting former regulators and policymakers (Teachout and Khan 2014). Over the last decade, digital platforms have increasingly lobbied the state on a growing number of policy issues, mirroring their expansion into new markets, and have hired former government officials in a practice known as the revolving door (Popiel 2018). These lobbying expenditures have grown amid US antitrust scrutiny (Romm 2021). From managing regulatory investigations and data-related scandals to advising governments on policy issues like cybersecurity and collaborating with them on contact-tracing apps to track and manage the transmission of COVID-19, tech giants like Amazon, Apple, Facebook, Google, and Microsoft have firmly established themselves as powerful political actors (Clark 2020; Byford 2020; Popiel 2018). Moreover, these companies regularly lend their services to political campaigns, actively participating in electoral politics (Bossetta 2020; Kreiss and McGregor 2017).

Digital platforms’ political activities are not neutral. They reflect the often-contradictory mix of social liberalism with a libertarian stance on economic regulation that constitutes the tech sector’s historically rooted ideology (Turner 2006). This “Silicon Valley ethos” (Levina and Hasinoff 2017) both guides and helps explain digital platforms’ specific political preferences and actions. It manifests in their PR activities (Popiel 2018) as well as in subtler forms, like funding academic research that supports their policy stances (e.g., Gouri and Salinger 2017). More importantly, these ideologies continue to resonate with policymakers, particularly the idea that technology naturally produces innovation and socio-economic benefits, which regulation would obliterate. Tech giants strategically deployed these discourses, exploiting gaps in existing policy frameworks and claiming they are not media companies but platforms (Gillespie 2010; Napoli and Caplan 2017), serving as neutral “conduits for the communication activities of others” (Flew et al. 2019, 45), to evade regulations governing traditional media and telecom sectors.

If platform ideologies and imaginaries define the contours of platforms’ policy activities, understanding their policy preferences and governance philosophies can help denaturalize and illuminate the significant sway platforms continue to wield over policy debates about the very frameworks meant to oversee them. Platforms’ policy communications, via dedicated public policy blogs and op-eds by their CEOs and top executives in prominent news outlets, provide clues about these preferences. I draw on a case study of these policy communications in 2019 (Popiel and Sang 2021) to provide an account of platforms’ policy preferences and their implications for debates about platform governance frameworks.

Digital Platforms’ Policy Communications

Platforms’ policy communications resemble PR “which attempts to participate in and shape a public conversation that is held in the media sphere” (Cronin 2018, 10), and to maintain legitimacy with multiple stakeholders (Hill 2020). Platforms’ policy blog posts and op-eds communicate to policymakers, civil society, potential competitors, and the public. They directly express these companies’ policy preferences by articulating stances on specific issues or by attending to policy issue areas deemed important. Cumulatively, they also communicate platforms’ ideas about platform governance (Fig. 7.1).

Fig. 7.1 Distribution of 2019 policy blog posts (n = 238) by company (Source: Popiel and Sang 2021)

Digital platforms address a breadth of issues, with a few receiving most of the attention and the rest receiving very little, following a long tail distribution (see Fig. 7.2). The breadth reflects both growing platformization, including expansion to other sectors, and attendant imbrication in a patchwork of regulatory arenas, absent a single oversight entity. The most frequently referenced issues reflect the most politically salient ones (e.g., content moderation and privacy) and, since these are especially central to Facebook’s operations, they also reflect the company’s prominence as a policy communicator. Indeed, as Fig. 7.1 shows, Facebook is the most active communicator, while Amazon is the least active. However, Microsoft is the most vocal about specific initiatives it supports or opposes and most diverse in the policy areas it engages, ranging from broadband deployment and environmental protections to a host of socio-economic issues like housing subsidies for lower-income families, addressing unemployment, and investment in education. This breadth of engagement may stem from the company being older than the others, and engaging politics since the 1990s, particularly around the US antitrust case against the company (Chandrasekaran and Mintz 1999). Ultimately, these policy communications also address policy topics not frequently associated with platforms (e.g., agriculture) that nevertheless intersect with their operations, suggesting the expansiveness of platformization and of what platform governance might denote, beyond familiar concerns like content moderation.

Fig. 7.2 Mention frequencies of the top 25 policy issues referenced in the policy blogs and op-eds (Source: Popiel and Sang 2021)

These policy communications: (1) describe platforms’ own policy initiatives to deal with platform-specific issues (e.g., automated hate speech detection); and (2) express preferences on policy approaches (e.g., calls for immigration reform). The former, which constitute the majority, indicate platforms’ preferred approaches and initiatives for addressing issues ranging from misinformation and hate speech to privacy and artificial intelligence (AI) regulation. These approaches conform to a pattern: engaging and partnering with other stakeholders (e.g., civil society organizations, policymakers, academics); championing technical solutions (AI and machine learning), supplemented with staff hires to assist those technical efforts; and implicitly or explicitly championing self-regulation with external oversight. Thus, platforms express support for public–private governance regimes in place of state-level regulatory intervention and engage in building them to forestall such intervention. These public–private partnerships, proposed or formed, range in scale from local, city-level to international. Taken together, platforms’ policy approaches and initiatives communicate the features and frameworks underlying their philosophy of self-regulation: tech-driven efforts, with liability dispersed through networks of engaged stakeholders, and some degree of external, independent, and often non-governmental oversight.

In terms of the latter, platforms communicate both specific policy preferences and preferred general approaches to governance (see Table 1). With respect to governance, drawing on their experience in policy areas like content moderation and on the multi-stakeholder model that characterizes internet governance (DeNardis 2014), platforms tend to support public–private partnerships, often with civil society organizations. While these partnerships are open to governments, they are often international in scope like platforms’ business operations and the nature of problems they intend to address. For instance, to promote cybersecurity, Microsoft supported “a multi-stakeholder model, with governments, industry, academia and civil society” (Frank 2019, para. 2). Likewise, Facebook collaborates with various NGOs to combat violent extremism on its platform (Facebook 2019b). These partnerships benefit platforms by allowing them to traverse individual nation-state policy regimes with varying jurisdictions and interests, and to coordinate policy initiatives at the international level at which they operate.

Contrary to accounts of Silicon Valley as libertarian (Turner 2006), digital platforms are not opposed to state intervention in specific policy areas, though they carefully seek to define the terms of those interventions. Influenced by the Silicon Valley ethos (Levina and Hasinoff 2017), particularly the sector’s imperative to “move fast and break things” with minimal government restraint, these companies frequently push for frictionless regulation: a necessary, but minimally invasive form of state oversight. Microsoft’s Brad Smith and Carol Ann Browne (2019) best articulate this regulatory approach, modeled after Silicon Valley business models:

there’s a strong case for governments to innovate in the regulatory space in a way that’s like innovation in the tech sector itself. Instead of waiting for every issue to mature, governments can act more quickly and incrementally with limited initial regulatory steps—and then learn and take stock from the resulting experience. Just as for a new business or software product that ships as a “minimum viable product,” the first regulatory step would not be the last.

If governments can adopt limited rules, learn from the experience, and subsequently use what they learn to add new regulatory provisions—much as companies add new features to products—it could put laws on a path to move faster. Officials must still consider broad input, remain thoughtful, and be confident that they have the right answers for at least a limited set of important questions. But by bringing some of the cultural norms developed in the tech sector into the regulation of technology itself, governments can start to catch up with the pace of technological change. (para. 18–19)

Thus, such frictionless regulation accepts errors, prioritizing quick action and experimentation over carefully designed regulatory frameworks. As Google CEO Sundar Pichai put it, such regulation “can provide broad guidance while allowing for tailored implementation in different sectors” (Pichai 2020, para. 10). It can “set baselines for what’s prohibited” (Zuckerberg 2019, para. 6), but also implicitly revise them, maximizing flexibility of action for the regulated firms and not imposing undue friction on their operations. This regulation is not “a singular end state; it must develop and evolve. In an era (and a sector) of rapid change, one-size-fits-all solutions are unlikely to work out well. Instead, it’s important to start with a focus on a specific problem and seek well-tailored and well-informed solutions” (Walker 2019a, para. 6).

Frictionless regulation is light, narrow in scope, confined to baseline standard-setting, receptive to the private sector’s ongoing feedback and therefore control, and thus overwhelmingly favors quick responsiveness to the market over the slow and deliberative responsiveness to the public, typical of democratic governance. The approach embraces the power-sharing logic of soft law, co-regulation, and co-governance frameworks. However, it also narrows them in scope, with tech specifying the general areas where it should be applied, and reduces the state’s role, with tech defining the mode of regulation (e.g., standard setting). Thus, frictionless regulation ultimately constricts state intervention to the smallest regulatory footprint, namely providing crucial baseline coordination that enables smooth platform business operations. By maximally reducing regulatory friction, the proposal tips the power-sharing regulatory balance toward big tech platforms, increasing their ability to influence policy.

Overall, digital platforms call for global over local standards in the following policy areas: AI, data, election integrity, privacy, free expression, and addressing terrorism. These calls reflect the global span of their business operations and the growing political scrutiny they face in these areas. Moreover, the lack of international consensus on the norms circumscribing regulation in these areas (DeNardis and Hackl 2015) has resulted in governance inconsistencies and sometimes high-profile controversies (Gillespie 2017; Perotti 2017), impacting digital platforms’ bottom line. For instance, Google stressed the importance of governments clearly delineating “between legal and illegal speech [since absent] clear definitions, there is a risk of arbitrary or opaque enforcement that limits access to legitimate information” (Walker 2019b, para. 5). In an op-ed on how to regulate the internet—itself reflective of platforms’ political power—Mark Zuckerberg emphasized that “a common global framework—rather than regulation that varies significantly by country and state—will ensure that the Internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections” (Zuckerberg 2019, para. 12). Since international coordination is the product of nation state policy, platforms see a crucial role for governments in harmonizing standards.

Policy Preferences

Aside from these general features of platforms’ policy preferences that prioritize multi-stakeholderism, frictionless regulation, and international norms and standards, platforms also support specific policies, some of which aim to define these norms and standards. For instance, one theme running through the policy blogs is a preference for a US-based approach to speech regulation and to competition law (e.g., Cunnane and Shanbhag 2019; Facebook 2019c). By being more permissive than other models (e.g., EU), US-based speech and antitrust laws place smaller demands on content moderation and require less oversight of these companies’ mergers and acquisitions, respectively. However, this may change as US regulators experiment with stronger enforcement in platform markets. Conversely, platforms support a strong privacy framework, often invoking EU’s GDPR as a model, and data portability regulations. Such regulations make “companies minimize the data they collect about people, specify the purposes for which they are collecting and using people’s data, and [ensure] they use personal data appropriately” (Brill 2019, para. 8). However, they also impose significant costs on smaller competitors, who may not have the resources to easily comply, giving dominant platforms a competitive advantage, and represent an effort to systematize data norms internationally.

Additionally, platforms support basic worker protections, like raising the minimum wage (though references to unions are notably absent), immigration reform, housing subsidies, and investment in education. These preferences not only reflect the socially liberal politics associated with the tech sector (Broockman et al. 2017), but also a strategic investment in its principal work force, particularly STEM education and liberal immigration policies that enable drawing talent from the global labor pool. Platforms also call for investment in infrastructure and bridging the digital divide. Though Microsoft is the most vocal on this issue, all five platforms examined here benefit from greater connectivity since it translates to more users of their services and since they themselves invest in internet infrastructure and cloud services (Mosco 2017).

Platforms see climate change as a pressing global challenge that requires immediate and decisive action. Though several platforms offer their services to big oil (Cole 2020) and their data centers significantly harm the environment (Hogan 2015), these calls appear to extend beyond PR to a recognition that “climate change is more of an existential issue” (Facebook 2019d, 14), as Mark Zuckerberg put it. Finally, they want governments to be more transparent around cybersecurity concerns and to open their datasets to the public. Federal data-sharing can jumpstart new tech markets that use this data, “stimulate better‑informed public‑sector and civic efforts to match the skills needed for new jobs [and] accelerate the adoption of open‑data models” (Smith and Browne 2019, para. 20), thus fueling the data economy the platforms dominate.

Implications for Platform Oversight

This chapter has traced digital platforms’ policy communications and preferences within the context of policy debates about designing platform oversight frameworks. These communications cover a breadth of issues, reflecting ongoing platformization and economic expansion, and function to influence policy debates, to advance business interests, and to promote policy-related initiatives to minimize the likelihood of state intervention. Cumulatively, they define a vision of platform governance marked by the prominence of tech solutionism (Morozov 2014), namely using technical tools to address non-technical problems; frictionless regulation, namely the narrow application and a retooling of state regulatory mechanisms to match the needs and dynamics of the tech sector over those of the public; and an embrace of multi-stakeholderism, at least in principle, to legitimate business decisions and disperse liability. Although often associated with a libertarian aversion to regulation (Turner 2006), digital platforms see governments as crucial to coordinating international norms and standards, like delineating the bounds of data collection, and to providing a range of basic labor protections, while facilitating recruitment from the global tech labor pool. Thus, states have an indispensable role to play, and platforms aim to define that role.

These preferences have oversight implications. Technical solutions implicitly downplay more structural interventions. For instance, AI-based speech moderation is less invasive than antitrust intervention into the data-based business models that thrive on the spread of inflammatory speech on platforms in the first place. Frictionless regulation, though a form of state oversight, is designed to be maximally responsive to platforms’ needs compared to more deliberative, processual, and therefore slower public interest regulation. The fact that platforms attempt to influence the design of regulatory mechanisms goes beyond risks associated with typical regulatory capture, namely influence over regulatory outcomes. State regulation resembles self-regulation if platforms dictate its terms. Finally, while in principle multi-stakeholderism expands participation in governance processes, it does not guarantee more legitimate policy outcomes by itself. Unless stakeholders have meaningful decision-making rights, including the ability to sanction platforms for bad behavior, their participation will be more symbolic than substantive. More importantly, such initiatives are platform-led, with platforms in a privileged governance position, enjoying limited accountability beyond potentially public disputes over content takedowns. In this context, experiments like Facebook’s Oversight Board work to legitimate a privately-led co-governance regime, offloading individual private content moderation decisions to public board members, while keeping the business model that gave rise to them intact.
Thus, debates over co-regulatory frameworks between states and platforms raise questions about (a) where technical solutions are more appropriate than structural interventions; (b) how to design regulatory rules and their enforcement in a way that is resilient amidst sectoral fluctuations, effective in achieving policy goals, and prevents capture; and (c) how to define rules for participation of a broad range of stakeholders, while delineating areas of coordination and collaboration.

More generally, the discussion of platforms’ policy preferences highlights key policy trade-offs. On the one hand, governments benefit from co-regulatory scope and scale, namely the advantage of dealing with a few large companies instead of many to address concerns like cybersecurity. Platforms prefer this approach since it makes antitrust intervention less likely. As Zuckerberg argues: “it’s a lot easier to regulate and hold accountable large companies like Facebook or Google, because they’re more visible, they’re more transparent than the long tail of services that people would choose to then go interact with directly” (Facebook 2019a, 7; see also Clegg 2019). However, aside from risks of capture, the pervasiveness of these companies across multiple markets also makes regulatory interventions particularly challenging. Thus, on the other hand, structural separation and strong economic regulations should likely precede any co-regulatory arrangements, which intensify interdependencies between co-regulatory parties, namely policymakers, civil society, and platforms. Without such intervention, platforms’ existing size and influence suggest that these co-regulatory arrangements are at risk of tending toward frictionless regulation, namely platform-directed state intervention.

Platforms’ policy communications strategically set the policy agenda on their terms, not just by expressing specific preferences aligning with platform business interests, but also by exploiting policymakers’ reliance on their policy input, particularly in co-regulatory approaches, to divert from certain policy options, like antitrust intervention. While certain platform policy preferences may align with public interests, contesting their influence over policy debates is a pressing first step in asserting democratic oversight over these markets.