Introduction

“[T]he current context is now fundamentally different, involving the use of our platform to incite violent insurrection against a democratically elected government,” pronounced Facebook founder Mark Zuckerberg in relation to Donald Trump’s use of the platform. Following pro-Trump protesters’ storming of the US legislature on 6 January 2021, tech giants Facebook and Twitter decided that Donald Trump should be locked out of the social network accounts he operated. These actions intensified longstanding debates around the risks that digital platforms may pose to democratic processes (Colarossi 2021). Critics on both the left and the right of the political spectrum have sought to curtail the legal protections that shield internet platforms from being held liable for content posted by people using social media. More recently, and in the wake of Facebook and Twitter’s ban on Trump, the platforms have also been questioned about their decisions to ban everyday citizens. The essential question is this: what duty of care do big corporations owe when their platforms operate as integral elements of the public sphere, yet cannot be held publicly accountable?

On one side of the argument, the ban applied to Trump operates as a kind of pre-censorship. It raises concerns regarding platforms’ power to moderate online content, their capacity for censorship, and protections for free speech on the internet (Oxford Analytica 2021a, 2021b). The opposing viewpoint argues that big tech has the right to censor content under its terms of service, which operate effectively as a contract between the platform and the people who use it. From this perspective, the platforms should take responsibility for preventing the promulgation of hate speech and disinformation. The polarised debate concerning Trump’s social media accounts highlights the regulatory gaps, and the blurred policies, governing social media platforms. It also raises the issue of corporate social responsibility (CSR) in terms of digital platforms’ handling of hateful and/or violent messages (Hern 2021).

The aforementioned controversies around the power wielded by the tech giants align with related concerns about surveillance capitalism (Holloway 2019; Zuboff 2019), and human and civil rights (Wagner 2018; Katyal 2019). They reinforce long-standing and widespread concerns about personal information privacy and the datafication of society (Van Dijck 2014). Further, such issues are not restricted to the corporate sector: it has become increasingly apparent that corporations are being co-opted into state surveillance practices. PRISM, for example, was the code name of a project run by the United States National Security Agency (NSA) which collected information and data with the support of US internet companies. Leaks by Edward Snowden in 2013 provided ample evidence that Americans’ online interactions, as well as digital data relating to non-US citizens, had been monitored, collected and shared by many US-based digital companies. These organisations provided NSA operatives with direct access to their servers, allowing the collection of personal information relating to billions of people around the globe (Landau 2013; Stoycheff 2016). More recently, the Cambridge Analytica scandal, revealed by whistle-blower Christopher Wylie in 2018, demonstrated platforms’ complicity in manipulating users’ perspectives on politics and democratic processes, potentially impacting actions and behaviours, ethics and political outcomes (Cadwalladr and Graham-Harrison 2018; Isaak and Hanna 2018). Such examples of information, or grey-zone, warfare (Hughes 2020) demonstrate platforms’ significant potential to exacerbate social fragmentation and polarise voters along ideological lines (DiFranzo and Gloria-Garcia 2017).

This chapter explores whether and how civil society in Western democracies can require platforms to take greater responsibility for the power they wield in informing democratic deliberation and debate. It asks: what changes would platform media need to make to ‘take responsibility’ in the digital landscape? Exploring existing regulation and legislation, the argument adopts a CSR perspective to deliver a more robust engagement with human rights and digital citizenship, benefiting individual citizens and the societies in which they live. It suggests that the platforms and companies supporting major social media sites be constructed as ‘publishers’, and required to take responsibility for harmful content they carry. Specifically, the discussion addresses the following questions:

  (i) What changes would platform media need to make to ‘take responsibility’ in the digital landscape?

  (ii) How might the future formation of corporate social responsibility support a more constructive, pro-social engagement with information, data and knowledge?

  (iii) Can platforms be held responsible for upholding individuals’ rights? and

  (iv) Do emerging understandings of CSR support more responsible practices by tech platforms?

The ‘Big Five’ Digital Technology Companies

It comes as no surprise to learn that the world’s 10 most valuable publicly listed corporations include the five largest Big Tech companies, all of which are US based: Google (Alphabet, since a 2015 restructure), Amazon, Facebook, Apple, and Microsoft (GAFAM) (Frost et al. 2019; Clement 2021). Together, these corporations comprise what is known as ‘the Big Five’ (Van Dijck 2020). Beyond market value, the Big Five have gained “rule-setting power” (Van Dijck 2020, p. 2), which is to say they operate as the gatekeepers for almost all the western world’s online social traffic and economic activities. Their services influence the very texture of society and impact the processes of democracy. In other words, online platforms are at the core of significant social change and development. They affect—and effect—institutions, economic transactions, and social and cultural practices (Chadwick 2014; Van Dijck et al. 2018).

This is not to say that these five tech companies start from, or pursue, the same approaches to their business models. Facebook and Google are essentially advertising driven and package users’ data to build market share. While they have very different corporate cultures and individual responses to regulatory intervention—most recently illustrated in their responses to the 2021 Australian News Media and Digital Platforms Mandatory Bargaining Code (Leaver 2021)—their income streams depend upon the commodification of their users’ information. Arguably, Apple is predominantly a hardware tech corporation, with Microsoft the dominant player in the software area. Amazon, in contrast, is a digital distribution behemoth, but it is also willing to enter niche markets (such as its acquisition of high-end food retailer Whole Foods) in order to gather more data about particularly wealthy groups of customers, as well as to trial new modes of delivery. Since all these conglomerates are data-driven, there is some business-model slippage across products and services. Amazon’s Alexa is an information-gathering device with a business model more aligned with Facebook and Google, while Apple TV+ is more about cementing a hardware market than it is about taking on Netflix. There is some less-than-friendly rivalry between the Big Five. As this chapter goes to press, for example, Apple is in contestation with Facebook over Facebook’s use of users’ information. Apple has highlighted the companies’ different approaches by explicitly seeking users’ consent to the sharing of their information with third parties (Statt 2021).

The Big Five have all developed from a single ‘big idea’ into huge conglomerates of interconnected platforms, going on to become dominant market players. Historically they built a core product, established its popularity and quickly disseminated it, adding value with aligned services and expanding operations to other sectors, while moving to dominate the market by acquiring potential competitors (e.g. Newton and Patel 2020). Given the cross-border, multi-market scale at which these companies operate, using national policies and laws to effect governance over them proves challenging. Further, the power and reach of these platforms, operating in and across free market economies, allows them to profit from outdated laws and inexact rules that are not fit for purpose when it comes to regulating digital environments and activities. The significant role that digital platform-driven companies play in the “heart of societies” (Van Dijck et al. 2018, p. 2) forces governments to second-guess legal interventions, continually anticipating the next innovation and activity. Conventional regulatory approaches and instruments struggle to safeguard public interests (Nooren et al. 2018).

The European Commission, for example, has attempted a range of regulatory options, including self-regulatory and co-regulatory models (Finck 2017). A recent regulatory attempt proposed by the European Commission is founded upon “principles-based self-regulatory/co-regulatory measures, including industry tools for ensuring application of legal requirements and appropriate monitoring mechanisms” (European Commission 2016). More recently, its regulation has been backed by significant sanctions and the threat of exclusion from one of the world’s largest consumer markets (~450 million people: the world’s third largest population, after China and India). The EU’s General Data Protection Regulation has been law since 2016 (Hoofnagle et al. 2019), and in force since 2018. It is supported by EU Regulation 2019/1150 promoting fairness and transparency for business users of online intermediation services (Anagnostopoulou 2020). These two regulatory tools aim to increase citizens’ control of personal data and to protect civic society from the negative impacts of exploitative and predatory activities by digital platforms and services.

The General Data Protection Regulation changed the European privacy landscape (Hoofnagle et al. 2019) but also prompted regulatory ripples worldwide. Among other reasons for this, the EU has a growing track record of enforcing regulation with respect to Big Tech. According to Keane (2015), Google is the world’s largest and most dynamic media conglomerate; its revenue amounted to US$181.69 billion in 2020 (Johnson 2021a), with an operating income of US$49 billion in that year (Johnson 2021b). The platform may seem too big to regulate, but Google was subject to almost US$10 billion worth of fines between 2018 and 2020 for anticompetitive practice in the EU (Whalen 2020). Penalties of that scale are one way to make platforms take notice. They also offer lawyers and regulators an opportunity to highlight the importance of a CSR ethic in the ways that platforms conduct themselves.

In October 2018 Facebook was fined £500,000 by UK regulators for its shortcomings as revealed in the Cambridge Analytica scandal. In July 2019 the platform, which includes Instagram, WhatsApp and Oculus, also settled a US Federal Trade Commission suit regarding Cambridge Analytica and other privacy issues, agreeing to pay a record-breaking US$5 billion fine while also implementing enhanced privacy measures (FTC 2019). This fine was big enough to see Facebook’s net income drop in 2019, even though revenue increased from US$56 billion to US$71 billion (Tankovska 2021).

In 2018 Google, Facebook, Microsoft, Twitter and others joined together to form the Data Transfer Project (DTP 2018a), an initiative of the Google Data Liberation Front (a team of Google engineers), with the supposed aim of creating “an open-source, service-to-service data portability platform so that all individuals across the web could easily move their data between online service providers whenever they want” (DTP 2018b). The idea was that an individual’s content posted on Facebook, for example, could be seamlessly moved to Google+.

While such an initiative might sound impressive, and would be welcomed by many users, it is yet to be delivered. Further, the Data Transfer Project would not address the privacy and data control issues highlighted by the Cambridge Analytica scandal. Users remain subject to unregulated advertising driven by the Online Behavioural Advertising (OBA) model that underpins digital platforms such as Google and Facebook, which provide ‘free’ services funded by the monetisation of users’ data without their explicit, informed consent (Edelman 2020; Torbert 2021). Arguably, given its lack of progress, the Data Transfer Project is little more than window dressing to make the platforms appear to be doing more with respect to CSR ideals.

The operation of digital advertising/surveillance capitalism (Holloway 2019) belies any apparent improvements in platforms’ ethical standards. It does more than construct audiences as “a commodity produced and sold to advertisers to use”, in Smythe’s (1981) famous aphorism. OBA allows platforms to build a detailed profile of a specific user, assembling what are termed ‘like-minded audiences’ articulated around features of particular value to advertisers, including to the shadowy covert operations that sought to influence the Trump election campaign and the Brexit Referendum, both in 2016. Effectively a psychographic profiling technique, such ‘digital experience’ services are central to the targeted information delivery approach revealed in the Cambridge Analytica scandal, and integral to hidden, unregulated advertising. Users cannot capture the advertisements they have seen, interrogate them, or examine their impacts, which are essentially subliminal (Wachter 2020). This model of advertising operates without transparency or accountability, raising issues around the “overpassing [of] ethical limits in terms of respect for the persuadee, equity of the persuasive appeal, and social responsibility for the common good” (Belanche 2019, p. 685).

The Western policy agenda now reflects global concern about digital platforms’ role and impact in relation to the digital economy, privacy and personal data exploitation, misinformation, harmful content and related issues (Flew et al. 2020). Australia’s Digital Platforms Inquiry report (ACCC 2019) is just one example of this concern; it particularly interrogates the impact of digital platforms on consumer access to quality news and journalism.

This section of the chapter has indicated that regulation, backed by sizeable fines, can help make platform media ‘take responsibility’ in the digital landscape (question i), and that corporate social responsibility, including around the regulation of the OBA model, could support a more constructive, pro-social engagement with information, data and knowledge (question ii). The EU’s regulatory actions against Google, and the FTC’s actions against Facebook, both indicate ways in which platforms may be held responsible for upholding people’s rights (question iii). Question iv, ‘Do emerging understandings of CSR support more responsible practices by tech platforms?’, is addressed in the sections that follow.

Corporate Entities, Capitalism and Democratic Ideals

The Australian Competition and Consumer Commission defines digital platforms as “applications that serve multiple groups of users at once, providing value to each group based on the presence of other users” (ACCC 2019, p. 41). The rapid growth of digital platforms highlights issues pertaining to CSR, with an emphasis on the intersection between businesses, digital citizenship, and the ways in which such entities are shaped by mutual interaction and mediated engagement with technology (Adi et al. 2015; Gold and Klein 2019; Schultz and Seele 2020; Stancu et al. 2018). The tech giants’ operations necessarily raise issues requiring a CSR response (Grigore et al. 2018). A new CSR model for the digital age, in which big tech companies face sanctions if they fail to adhere to a robust Code of Conduct or an appropriate Code of Ethics, would add substance to the implied commitment to CSR that permeates the discourse of the digital economy.

CSR has been defined as “an evolving business practice that incorporates sustainable development into a company’s business model. It has a positive impact on social, economic and environmental factors” (Schooley 2020). Carroll (1991, 2016) suggests conceptualising it as a pyramid constructed from four constituent elements: economic responsibility, legal responsibility, ethical responsibility, and philanthropic responsibility. There is no agreed definition of CSR, however. It operates as an umbrella term, in many senses as a buzzword or catchphrase, and is sometimes substituted for, or treated as if it also referred to, environmental, social, and governance (ESG) aspects of corporate activity. Arguably, there are corporations that might continue to suggest that their only legitimate role is to maximise shareholder value. If they wish to have a social mandate to operate in a post-industrial information society, however, corporations need to be seen to be minimally ethical and to avoid flouting standards of acceptable business behaviour. Flagrant disregard of public expectations can exact a significant toll on a company’s balance sheet.

Beck (2019) argues that, nowadays, boycotts are a significant means of social protest against companies. Such boycotts can be called for in response to environmental pollution, violations of standards for workers, mistreatment of animals, and the like. As a result, low CSR standards or performance have the power to undermine both profitability and share price, wiping out years of productive work to maximise shareholders’ equity. Conversely, positive CSR is perceived as supporting sustainability.

Consumers are increasingly aware of their buying power, and the value of their goodwill. Over the years they have become ever more inclined to call for, and participate in, mass boycotts. The 2015 Cone Communications/Ebiquity Global CSR Study found that 91% of global consumers expect companies to operate responsibly, with 84% saying that, where possible, they seek out goods made by responsible companies (Cone Communications 2015). On the investment side, 25% of organisations claim they operate in accordance with best practice standards of environmental, social, and governance principles (Flood 2019). The proportion of companies making such claims is expected to more than double, to between 50 and 65% of all publicly reporting companies, by 2024 (Flood 2019).

When the Cambridge Analytica scandal broke, implicating Facebook in anti-democratic activities, that corporation lost US$45 billion in value over five days (Economist 2018). Although this value was subsequently regained, and retained, despite the FTC fine (FTC 2019; Davies and Rushe 2019), the initial precipitous drop in share valuation is a cogent indication of the risks that corporations run when they lose public trust. As a result of this and other examples, such as Rio Tinto’s Juukan Gorge debacle (Verrender 2020), people working in finance and investments within western contexts cannot ignore the growing zeitgeist that mandates incorporation of CSR criteria into an evolving value equation. This dynamic also reflects the fact that low CSR commitment is an increasing regulatory and legislative risk. The Australian News Media and Digital Platforms Mandatory Bargaining Code, arising from the ACCC’s Digital Platforms Inquiry (2019), is just one recent example. In a world first, it forced tech giants to pay Australian news outlets for their proprietary content when it is accessed, read and shared on social media and by search engine users.

The theoretical foundations of CSR are deeply interconnected with the idea of stakeholder engagement and, according to Freeman and Dmytriyev, it is “part of [the] corporate responsibilities oriented toward all stakeholders” (2017, p. 14). Carroll (1991) argues that, “the concept of stakeholder personalizes social or societal responsibilities by delineating the specific groups or persons business should consider in its CSR orientation” (p. 43). Such obligations impact digital platforms, as they do all other commercial entities. Platforms need to engage end users as well as investors. Given that digital platforms aim to build sustainable businesses, thereby taking economic responsibility, they also need to meet the expectations of their stakeholders, with a particular focus on two core categories of end-user – platform users/audiences and advertisers. Rieder and Sire conceptualise this process as a requirement for businesses to get stakeholders “on board” (2014, p. 199). For digital platforms, this means that the connection between CSR and stakeholders is, if anything, of greater importance because digital platforms operate in the context of a service industry, rather than providing tangible goods. In the same way that CSR forms a nexus for delivering social goods along with economic profits, so CSR connects stakeholders, markets, regulators and digital platforms.

Arguably, CSR has different implications for different market segments and operating conditions. Within the digital environment, CSR may imply that the platforms and related organisations operate to develop and support a conscious sense of an engaged citizenship, within the context of which the platform and its users work with each other to support democracy, free speech and principles of transparency and accountability. Facebook Australia’s decision to restrict news publishing and sharing on 18 February 2021, in response to what the company perceived as an attack on its business model (the requirement that it pay for the Australian-originated news content that users post on its platforms), constructed Facebook as an overpowerful bully. While Facebook may have characterised the precipitating introduction of the Australian News Media and Digital Platforms Mandatory Bargaining Code as an act of aggression, that regulatory action had far less perceived impact on the lives of everyday Australians than did the Facebook ‘news ban’ response (Hutchens 2021). Further, given that both Facebook and Google were impacted in equivalent ways, and that Google reluctantly complied with the new regulations whereas Facebook (initially) fought them, Facebook’s response appeared out of proportion to the threat posed to it by Australia’s regulators, particularly in the context of its global market dominance.

Facebook appears to have lacked a sense of the implied social licence under which it services Australia’s social media discourse. In protesting the regulators’ actions, it was perceived as harming “community groups, charities, sport clubs, arts centres, unions and emergency services” (Hutchens 2021). Facebook has always been more than a news source because of the operations of OBA. It provides a service that is created in the image of, and harnessed to the production of, information that is relevant to the interests of every Australian Facebook user, including friends, families, communities, sports, arts, hobbies and health. It is a community space where ideas are shared and discussed. As well as showcasing news content, Facebook is often mined by news organisations for leads and stories. Further, Facebook’s pages are used to confirm and contextualise what readers and viewers may have seen or heard elsewhere.

Based on the suggested nexus between CSR and stakeholder theory, news organisations and Facebook users are both key stakeholders. If one of the two groups is absent, the demand from the other reduces. If Facebook’s aim is to build a sustainable product, it needs to recognise its responsibility to the wider Australian community as well as to other groups of key stakeholders. In the end, this is what Facebook did, with only an imperceptible impact on its profits, by negotiating with Australian news producers and supporting the coexistence and growth/sustainability of Australia’s media and journalism industry. Facebook’s temporary attempt to contravene the social contract, the implied CSR licence under which it operates, has been constructed as something akin to ‘an own goal’ in soccer. As Lewis (2021) noted a week after Facebook’s policy reversal: “the social network’s hostile attack on Australian users reinforces the need to tackle the monopoly power of tech giants”. A stronger commitment to CSR on Facebook’s part would have allowed it to sidestep much of the opprobrium that followed, and would have left the iron fist unseen and unused in its velvet glove. As it was, the organisation opened itself up to wry comments about Facebook’s agreement “to re-friend Australia” (Lewis 2021), and undermined public confidence in Facebook’s understanding and performance of CSR.

CSR, Platforms and Regulators

Digital platforms, including Facebook, Twitter, Google and Amazon, have played a vital role in realising critical public values (Helberger et al. 2018) and making them more accessible. The absence of effective legislation and regulation governing the platforms is becoming more evident over time, however. Policymakers and lawmakers struggle to respond as they try to redress differentials of power and accountability. Flew and Wilding (2021, p. 48) call it “the turn to regulation in digital communication.” Grigore et al. (2018) suggest “a move from firm-centric orientations to stakeholder-centric orientations, and benefits and risks associated with the use of digital technology” (p. 24). Finck (2017) and Helberger et al. (2018) propose a co-regulation model to address the challenges inherent in cross-border multinational hegemonic organisations. In some ways, such a model recognises the operation of regional regulators attempting to work with, and rein in, international companies. Many of the newly enacted laws and regulations, in Europe as well as Australia, adopt this approach, making compliance with local law the price of doing business in the local market. In essence, this aligns local stakeholders’ notions of CSR with organisations’ best interests, explicitly linking the regulation of digital platforms to their licence to operate in key markets.

This section has indicated how emerging understandings of CSR are supporting more responsible practices by tech platforms, including the Big Five.

Competing Conceptions of Acceptability and Accountability

CSR, as it operates within the context of western democracies, is expected to align with the fundamental tenets of digital citizenship. Generally, attempts to regulate digital platforms begin with market-friendly self-regulatory and co-regulatory models and move along an interventionist scale to arrive at top-down legislative intervention (Finck 2017). The failure of platforms’ self-regulation (Flew and Gillett 2020) is evident in examples such as Cambridge Analytica, both because such self-regulation lacks transparency and because it does not account for the interests of actors other than those that benefit the platform itself (Finck 2017). Self-regulation is comparatively easy to ignore when problems arise that conflict with platforms’ self-interest. Facebook, for example, claims to moderate the content posted on its site to prevent violence, pornography, and privacy violations, but the boundary between what is acceptable and what is prohibited is not always clear. In Vietnam, for example, Facebook may find itself pressured by state actors to remove or obfuscate dissent, which officials might deem to be “undermining national security, social order and national unity” (Banyan 2013). This pressure exists even when the suppressed content does not violate Facebook’s publicised community standards. China, similarly, requires platforms to block content deemed illegal or offensive, and punishes platforms and services that do not comply. As Braw (2021) argues, “For firms under pressure from China, it makes little sense to remain loyal to a home country where the share of revenue is often quite small if doing so brings the risk of losing a much bigger market.” Many such state-issued regulations contrast with western ideals of free speech, however, under which citizens may argue that platform review of content prior to posting is censorship, and anti-democratic (Gillespie 2017, 2018).

Finck (2017) and Helberger et al. (2018) accordingly propose co-regulation as an appropriate paradigm for future approaches, whereby “companies develop […] mechanisms to regulate their own users, which in turn must be approved by democratically legitimate state regulators or legislatures, who also monitor their effectiveness” (Marsden et al. 2020, p. 1). This paradigm is also compatible with a CSR orientation that considers the benefits and risks to stakeholders of using digital technology (Grigore et al. 2018). It encourages CSR by promoting a better understanding of the challenges and risks that digital technologies might raise for stakeholder groups, not only for platforms themselves.

Such discussions take place in a context where there has been “little reflection on the responsibilities of digital platforms in the markets in which they operate” (ACCC 2019, p. 1). Meanwhile, there is no clear agreement as to what comprises digital CSR, as the following discussion notes. Further, there is little regulation in smaller markets that is backed up by robust legislation that would encourage the Big Five platforms to change their behaviour. Ideally, a future-facing conception of CSR would embody the principles of open society, civic responsibility, market autonomy and accountability under the rule of law, as well as supporting an enhanced vision for digital citizenship, benefiting individuals, communities and the societies in which they live.

But what happens when democratic ideals clash in irreconcilable conflict? Such a contestation is highlighted by the example of the Christchurch shootings on 15 March 2019, when a gunman opened fire in two mosques in that New Zealand city, ultimately killing 51 people and injuring scores more. The gunman filmed his entire crime, posting it live on Facebook. The footage, which was subsequently copied and widely shared on social media, found its way onto the pages of some of the world’s biggest news sites in the form of images, GIFs and even videos (Macklin 2019). Soon after it was realised that such (re)posting was, in effect, part of the gunman’s motivation, social media and news sites removed the images. In total, Facebook removed about 1.5 million videos of the attack within the first 24 hours, automatically blocking 1.2 million of these at upload and removing a further 300,000 copies after they were posted (Macklin 2019). The event became a warning to platforms regarding their appropriation for terrorism and violence, and demonstrated the dark side of social media as a facilitator of xenophobia (Crothers and O’Brien 2020).

Jacinda Ardern, New Zealand’s Prime Minister, drew upon models of world best practice relating to suicide coverage, extrapolating that the airing of some information might create support for copycat behaviour (Greensmith and Green 2015). She also embraced emerging guidance around the reporting of mass shooters: don’t name the shooter, don’t discuss their politics, focus on victims, support stricken communities, and make change where possible, such as banning the weapons and the transmission of the images. Ardern is in one corner of a debate around how platforms should perform in terms of CSR. Two months later, in Paris, Ardern joined with French President Emmanuel Macron to call for an end to “the circulation of abhorrent material.” Seventeen countries and a number of tech companies, including Facebook, Twitter, Google, Microsoft and Amazon, responded to the ‘Christchurch Call’ by signing a pledge to stand against online terrorism and extremism.

Australia was deeply implicated in the Christchurch shooting. This was not only because of the very close trans-Tasman connection, but also because the killer was Australian, and Australia had failed to identify him as a terrorist threat (Tarabay and Graham-McLay 2019). In response to the killer’s use of Facebook to publicise his crimes, Australian Prime Minister Scott Morrison said, amongst other things, that his country would do more to regulate international digital media companies. He suggested that organisations cannot be relied upon to do the right thing, and that legislation is therefore required. “It should not just be a matter of just doing the right thing. It should be the law,” he said (Kelly 2019).

Jacinda Ardern has been widely praised for the intent behind the ‘Christchurch Call’ and her demand that all footage of the Christchurch mosque shootings be removed from the internet. In this case, there is general agreement that images promoting violent hate crimes are unacceptable. There is widespread uneasiness, however, about legislation that draws a line between what constitutes acceptable and unacceptable digital content. For example, a 2007 attack by a US Apache helicopter killed 12 people in Baghdad, Iraq, including two Reuters staff. The video of that atrocity was posted by WikiLeaks in 2010, calling attention to US forces’ behaviour in the face of perceived threats posed by unarmed civilians. It stimulated debate about Chelsea Manning’s and Julian Assange’s right to publicise footage of US killings, and associated moral issues. These included whether the West was justified in subsequently allowing the screening of Daesh footage of executions (Schmid 2015). While Julian Assange argued for the legitimacy of his actions under a right to ‘free speech’ (Alexander and Stewart 2010), other moral issues raised included whether the Apache helicopter footage might have mobilised US public support for the end of the Iraq war and helped lead to “exit strategies” (Hasian Jr. 2012, p. 190).

If the public sentiment is that Jacinda Ardern was right to call for removal of the Christchurch mosque terrorist shootings footage, might the same arguments undermine Chelsea Manning’s and Julian Assange’s right to publicise a much shorter video documenting the killing of 12 civilians from a helicopter gunship? The question, as Rusbridger (2019) poses it, is: “Was it in the public interest that the world should have eventually seen the raw footage of what happened?” It may be relatively easy to justify access to Daesh footage as helping persuade western audiences that the organisation is murderous, inhumane, and barbaric, thereby supporting military intervention (NATO 2015). That end may be argued as justifying those means. But deciding which media are widely publicised and which are not on the basis of the ‘motivation’ for posting content is not a sound foundation for effective, unambiguous, enforceable regulation.

In a final example, from 2014, The Australian newspaper controversially published a front-page image of a seven-year-old Australian boy holding the head of a slain Syrian soldier, given to him by his father. This became a touchpaper for discussion about home-grown terrorism in Australia (Klausen 2015). These cases highlight different aspects of what may or may not be socially responsible, and of what is or is not a defensible way to deal with media coverage of life and death in violent scenarios.

The above three case studies show the complexity of mandating digital platforms’ adoption of CSR in deciding what constitutes good corporate digital citizenship. Is nuance possible? Judgement calls demand extraordinarily complex decision making: to justify, say, the screening of an Apache helicopter attack ‘in the public interest’, but the suppression of the Christchurch shootings footage under the same rationale. Such nuance goes to the heart of emerging understandings of CSR in support of what constitutes responsible practices by digital platforms.

CSR and Digital Platforms: Complexity or Child’s Play?

Prior to digitisation, organisations may have had time for decision-making around what is and what is not publishable in the public interest. In contemporary contexts, such decisions need to be made instantaneously, and are generally delegated to algorithmic computation. But can algorithms identify pro-liberal democratic priorities?

Western publics have focused on regulators’ intentions to require the digital platform ecosystem to use its technology (artificial intelligence, facial recognition software, biometrics, big data, machine learning, targeted communications, social media commentary and more) to make decisions in the public interest. Global discussions to this end include the General Data Protection Regulation in the EU, US Democrat Senator Elizabeth Warren’s suggested breaking up of the tech giants, President Biden’s recent assault on big tech’s “anti-competitive practices” (Paul 2021), and the German government’s legal measures against social media platforms that fail to take down hate speech, fake news, and defamatory content within 24 hours of it being posted. These are all battles over public values and competing social, economic and cultural interests.

This chapter has considered a range of critical incidents to draw attention to the issues raised by CSR in relation to digital platforms. These platforms are not rogue operators, but neither are they entirely aligned with what an informed public might see as the ideal of supporting liberal democracies. Regardless of their influence in cultural and communication contexts, big tech companies are corporations run for profit and designed to extract the greatest possible value from the workings of the ‘free market’ in late-capitalist societies.

What changes would platform media need to make to ‘take responsibility’ in the digital landscape? Evidence of effective intervention strategies, mainly from the EU, suggests that new forms of CSR can enable an enhanced vision of digital citizenship that benefits both individuals and the societies in which they live. The remaining challenge is to embody democratic society’s ideals, civic responsibility, and accountability through stakeholder co-regulation and the rule of law. That is a possible way to combine a free market economy with an end to the unbridled commodification of citizens’ data.

Joining Facebook, or using Google, costs people the data they use and produce. Implicitly, users agree to be monitored, but might it be possible to change this situation? Gillespie argues that:

these platforms not only host that content, they organize it, make it searchable, and in some cases even algorithmically select some subset of it to deliver as front-page offerings, news feeds, subscribed channels, or personalized recommendations. In a way, those choices are the central commodity platforms sell, meant to draw users in and keep them on the platform, in exchange for advertising and personal data. (2018, p. 210)

The rights of users should be taken on board in stakeholder approaches to platforms’ performance of CSR. Current regulations often place digital platforms into a single category, such as communication, media or e-commerce, rather than capturing different digital platforms’ heterogeneity (Nooren et al. 2018). Furthermore, some large platforms are conglomerates of interconnected platforms (Nooren et al. 2018) with diverse characteristics. When Facebook bought Instagram, for instance, it was not just buying Instagram; it was closing down a potential competitor. Smyrnaios (2018) shows how platforms use vertical integration to support an internet oligopoly: “well positioned throughout the [value] chain, either through mergers or acquisitions, stock purchases, or exclusive and privileged partnerships with companies that are upstream or downstream of their core business” (p. 91).

Hard-to-measure benefits, such as the quality and diversity of services and products, also require consideration (Coyle 2019). As Furman et al. (2019) argue:

A pro-competition approach will provide a swifter and more proportionate means of addressing the competition challenges posed by the tendency of many digital markets to tip towards one or two large players. The introduction of a principle-based framework, developed in collaboration with the relevant players, is likely to be better suited than ex post enforcement to dealing with new and evolving practices in fast-moving digital markets. The presence of a stable and predictable framework would also provide welcome certainty to platforms on the rules of the game for operating in these markets. (pp. 123–124)

Takedown of child sexual abuse images and some aspects of violent and terrorist-related activity might be easily agreed as core business in western democracies. There is less consensus, however, on what constitutes hate speech, misinformation and tolerable forms of political debate (even where it may be offensive and polarising); on what is newsworthy and in the public interest, and what is not; and on who or what should determine the boundaries between these (Gillespie 2017). Addressing the ‘regulatory imbalance’ between traditional media and digital platforms (Flew et al. 2020), as reflected in the ACCC’s Digital Platforms Inquiry (ACCC 2019), may offer one form of resolution. Indeed, patterns of information circulation and public use of digital media might suggest regulating digital platforms like news media agencies. As the ACCC (2019) notes:

Digital platforms actively participate in the online news ecosystem, performing several of the same functions as news media businesses. This means that digital platforms are considerably more than mere distributors or pure intermediaries in the supply of news content in Australia. Despite this, virtually no media regulation applies to digital platforms in comparison with some other media businesses. (p. 166)

Western publics’ ‘right to know’ requires a nuanced balance of competing interests. Applying patterns of regulation, legislation and enforcement, and treating digital platforms like news media agencies, will potentially require digital platforms to pay more attention to CSR. In the case of the image of the seven-year-old Australian boy holding the head of a dead Syrian soldier, The Australian was required to account for its decision to publish. Given that publication contravened regulations and social norms, the paper had to advance an argument as to why it was in the public interest. In news media contexts, the professional and ethical codes defining best practice play a crucial role in supporting responsible journalism (Donovan and Boyd 2021). Holding digital platforms to the same account as publishers and news agencies may support their more robust engagement with CSR.

Regulation (including self- and co-regulation), legislation and enforcement are all required if the platforms are to change their practices. Such changes will help make platform media take responsibility in the digital landscape, supporting a more constructive, pro-social engagement with information, data and knowledge. Platforms can and should be held responsible for upholding individuals’ rights. Emerging understandings of CSR in the digital realm support improved operating practices not only on the part of tech platforms, but also on the part of national and international agencies and regulators.