5.1 Introduction

The high cost of maintaining internal IT systems that are scalable, robust, and fast enough to keep pace with the speed of business has ushered in the era of cloud computing services. Global cloud market revenues are predicted to increase from US$180 billion in 2015 to US$390 billion in 2020, an average annual growth rate of 17% (Forbes 2017). All indicators of the growth of cloud computing, such as data traffic, the number of data centers, and the number of cloud service providers (CSPs), are also predicted to grow exponentially (Cisco 2018a). Whether applied to enterprise management systems, email systems or mobile computing, cloud computing both increases efficiency and reduces information technology costs by allowing on-demand access to a shared pool of computing resources that can be rapidly provisioned and released with minimal management effort (Mell and Grance 2011). However, although cloud computing's benefits are tremendous, security and privacy concerns continue to be the primary obstacles to wide adoption (CSA 2009).

Cloud computing presents particularly challenging issues for privacy: it disperses data across servers in geographically distant locations that often cross national and legislative boundaries, and it involves shared use of resources, making privacy incidents difficult to determine and detect (Ren et al. 2012). This was evidenced in 2019, when Rubrik, the cloud data management company, exposed a large cache of customer information improperly stored in an Amazon Elasticsearch database (Techcrunch 2019). With the rise in high-profile privacy incidents, such as Cambridge Analytica in 2018 and Capital One's data breach on AWS (Fortune 2019), cloud service customers (CSCs) have also become more aware of the potential risks of processing data outside traditional on-premise models. Additional tensions arise from the costs of detecting, preventing and/or remediating privacy incidents versus the value that can be derived from data services (Acquisti 2008; Chen et al. 2012).

Governments, regulators and policymakers respond to such incidents by enhancing the specificity and stringency of compliance regulation; however, the pace of change in cloud computing and the exponential growth in data are fast outpacing the legislative lifecycle. Instead of focusing on regulation as a response, approaching privacy from a non-regulatory perspective may yield more sustainable solutions for CSPs to balance these tensions more effectively. Reflecting this, in this chapter we draw on control theories of privacy (Fried 1984; Moor 1997) to explain how CSPs comply with privacy law and exert power over data and systems, and on theories of procedural justice (Lind and Tyler 1988) to explain how CSPs can 'loosen the leash' and give the CSC more control.

It would be highly complex to map cloud computing issues across the full panoply of regulatory privacy architectures, such as the California Consumer Privacy Act (CCPA), the General Data Protection Regulation (GDPR), the United Nations Declaration of Human Rights (UDHR), and the Health Insurance Portability and Accountability Act (HIPAA). However, given the broad reach of GDPR (it applies to all EU CSPs, to any non-EU CSP processing the data of EU residents, and to any data processing within EU territory) and its position as the strongest data protection regime in the world (Consumers-International 2019), we focus on GDPR in this chapter as the common reference for privacy regulation.

Applying these concepts of control and justice to privacy, we propose a privacy orientation framework describing the key privacy orientations of CSPs. This framework can help explain different approaches to privacy and their implications. Our framework extends research from Greenaway et al. (2015) to posit a new dimension to privacy orientations, which we call the Philanthropic Privacy dimension, where organizations undertake privacy activities with the aim of resolving privacy as a social issue. This dimension is important, as organizations taking actions beyond their corporate obligations have been found to experience fewer privacy incidents (Accenture and Ponemon 2015).

The remainder of this chapter is organized as follows: in the next section we discuss the tensions arising from privacy, through the lens of control and justice. Following that section, we provide a brief overview of privacy risks in the CSP environment and outline the boundaries of privacy responsibilities within cloud computing models. We subsequently describe the development of our privacy orientation framework. We conclude this chapter with proposals for application of the framework and suggestions for further research.

5.2 Privacy as a Control or Justice Behavior

Dinev (2014) suggests that societal discourse has come to equate privacy and information privacy, so the terms are used here interchangeably. Information privacy is frequently described as a multidimensional concept that is dependent on context (Culnan and Williams 2009; Xu et al. 2012). The dimensions affecting privacy are harm, protection, provision, and scope (Mulligan et al. 2016), and the facets shaping context are information sensitivity, industry sector, political sector, and technological applications (Smith et al. 2011).

For over two decades, theories of control and theories of justice have been used to explore privacy challenges. Control theories, such as social contract theory, have frequently been applied to privacy challenges between organizations and consumers (Martin 2012, 2016; Wright and Xie 2019), as have justice theories, e.g. procedural and distributive justice (Ashworth and Free 2006; Culnan and Armstrong 1999). However, with the exception of Greenaway et al. (2015), no other study has combined these theoretical concepts to provide a balanced view of the privacy tensions arising between CSPs and CSCs.

5.2.1 Privacy as Control

While there is no single concept of information privacy that crosses all disciplines (Smith et al. 2011), 'control over personal information' is a common theme across many privacy studies (Belanger and Crossler 2011; Belanger et al. 2002). Control can refer to the 'controls' used to manage privacy (Belanger et al. 2002) or to the dynamic of 'power' over data (Johnson 2009). Both CSCs and CSPs implement 'controls' to manage privacy (Belanger et al. 2002) in the form of privacy enhancing tools, network access controls, authorization and authentication controls, privileged identity management controls, and so on. When privacy controls fail, a privacy incident is said to occur, where a privacy incident is defined as the loss of control, compromise, unauthorized disclosure, unauthorized acquisition, or any similar occurrence where an authorized user accesses or potentially accesses personal information (DHS 2017, p. 8). Johnson (2009), on the other hand, refers to control as an organization's need for power over information, while the consumer has to balance the good achieved through information processing against their need for privacy.

Providing the CSC with increased control over their data can prevent the CSP from maximizing data storage efficiency, require the CSP to implement costly technical tools, or necessitate costly legal indemnification of risk away from the CSP. Greenaway et al. (2015) describe this tension as the pursuit of interests such as profitability or market share at the expense of those who provide information or pay for a service. This need for a CSP to dominate control is enshrined in the concept of information ownership, and it is this concept of control as power that we apply in this chapter, as reflected by Xu et al. (2012), who distinguish between individual and organizational privacy control, with the organization increasingly becoming the 'control agent'.

However, privacy is not solely about control but also about information being authorized to flow to specific agents at specific times (Moor 1997). Moor (1997) argues that in a highly digital culture it is simply not possible to control all information. The best way to protect privacy, he argues, is therefore to ensure the right people have access to relevant information at the right time, giving individuals as much control over their data as realistically possible (he labels this the "control/restricted access" theory of privacy). With the emerging complexity of organizational networks, such as those constructed by CSPs, such levels of control are not realistically possible. Greenaway et al. (2015) classify organizations that provide little control to their consumers as 'low control'. However, aligning with the concepts of control that we draw on in this chapter, we would classify these organizations as 'high control', as they essentially dominate 'power' over consumers' information.

5.2.2 Privacy as Justice

The power-responsibility equilibrium (PRE) model, developed by Davis et al. (1980), states that power (which Laczniak and Murphy (1993) define as 'the ability to control') should be in equilibrium, where the partner with more power also has the responsibility to ensure an environment of trust and confidence. If an organization chooses a strategy of greater power and less responsibility, it might benefit in the short term but will lose control in the long term, e.g. through increased regulation (Caudill and Murphy 2000). Organizations can rebalance the equilibrium by returning a level of control to the consumer. For instance, in 2019, Apple updated its privacy website with a revised declaration of the company's position on privacy. This included a new section titled 'Control' with a subsection titled 'Take Charge of Your Data', which outlines:

To give you more control over your personal information, we are rolling out a set of dedicated privacy management tools.

This relinquishing of control by an organization is enshrined in the concept of information stewardship. Information stewardship implies that no matter what an organization does with stakeholders' information (for example, selling it to third parties), the organization always remains responsible and retains oversight of the processing of that information (Rosenbaum 2010). By providing clear and simple explanations of what privacy options are available and how to implement them, Apple offer their consumers opportunities that return a level of transparency and control, and that score highly against the Fair Information Practice Principles (FIPPs). The principles of FIPPs are transparency, preference, purpose, minimization, limitation, quality, integrity, security, and accountability (DHS 2008, p. 4). Information privacy and FIPPs have been linked to procedural justice (Culnan and Bies 2003), where an organization's control over personal information is regarded as 'just' when the information owner is vested with the key principles of FIPPs (Culnan and Bies 2003; Greenaway et al. 2015). Organizations whose behaviors exceed their compliance obligations to protect information entrusted to them view themselves as information stewards rather than information owners and demonstrate a 'culture of caring' with regard to privacy (Accenture and Ponemon 2015). These types of organizations have also been found to experience fewer privacy incidents (Accenture and Ponemon 2015).

5.3 Privacy and Cloud Computing

Although cloud computing enables reduced start-up costs, reduced operating costs and increased agility, its architectural features also raise various privacy concerns that are shared across several key stakeholders (Takabi et al. 2010), such as the CSP, the CSC, the consumer/data subject and application developers. Organizations are increasingly concerned about the risks of storing their data and applications on systems that reside outside their on-premise data centres (Chen et al. 2010). ENISA (2009) define these risks as:

  • privacy risks for the CSC e.g. non-compliance to enterprise policies and legislation, loss of reputation and credibility and being forced or persuaded to be tracked, or give personal information against their will (e.g. by governments)

  • privacy risks for implementers of cloud platforms e.g. exposure of sensitive information stored on the platforms (potentially for fraudulent purposes), legal liability, loss of reputation and credibility, lack of user trust and take-up

  • privacy risks for providers of applications on top of cloud platforms, e.g. legal non-compliance, loss of reputation, and 'function creep', i.e. personal information stored in the cloud being used for purposes other than originally intended

  • privacy risks for the data subject e.g. exposure of personal information

However, unlike the traditional on-premise model of computing, responsibility for these privacy risks in the cloud computing model is shared between the CSC and the CSP, and the balance of responsibility shifts between cloud service models. This relationship is known as the shared responsibility model, and it is the basis for how modern cloud security and privacy operate, with each service model differing in the amount of control offered to the CSC (Tripwire 2018). Haeberlen (2010) describes how the different cloud service models affect the way privacy responsibilities are shared between CSPs and CSCs (a simplified sketch of this mapping follows the list):

  • IaaS—the CSP is responsible for the implementation and management of privacy controls only within the physical infrastructure. The CSC is responsible for all other aspects of privacy.

  • PaaS—the CSP is responsible for IaaS. However, CSCs and CSPs are jointly responsible for ensuring appropriate privacy controls are implemented within the applications deployed on the PaaS environment.

  • SaaS—the CSC has limited control over privacy and security. CSCs will generally maintain responsibility for managing identity and access management controls to ensure minimum permissions are assigned to roles. The CSP is responsible for ensuring all other privacy controls are in place.
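
To make this division of labour concrete, the mapping above can be expressed as a small lookup structure. This is a minimal sketch under our own simplification of Haeberlen's (2010) breakdown; the layer names are ours, and real contracts partition responsibilities in far more detail.

```python
# Simplified encoding of the shared responsibility model described above.
# Layer names and the three-way split are our own illustration, not a
# normative matrix; "joint" marks shared CSP/CSC responsibility.

SHARED_RESPONSIBILITY = {
    "IaaS": {"physical_infrastructure": "CSP",
             "applications": "CSC",
             "identity_and_access": "CSC"},
    "PaaS": {"physical_infrastructure": "CSP",
             "applications": "joint",
             "identity_and_access": "CSC"},
    "SaaS": {"physical_infrastructure": "CSP",
             "applications": "CSP",
             "identity_and_access": "CSC"},
}

def responsible_party(model: str, layer: str) -> str:
    """Return who holds primary privacy responsibility for a control layer."""
    return SHARED_RESPONSIBILITY[model][layer]

print(responsible_party("PaaS", "applications"))  # -> joint
```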

As a CSC moves from on-premise models to cloud service models, it loses control over its data, including control over the privacy of that data. Although the shared responsibility model assumes a level of transparency and control for the CSC, CSPs have traditionally lacked transparency regarding their privacy policies, strategies, services, thresholds and so on, making it difficult for CSCs to objectively perform evaluations and risk assessments of a CSP service (Cruzes and Jaatun 2015). The implementation of ethical principles such as those in FIPPs not only mitigates the key privacy risks associated with cloud computing (Pearson 2009) but also offers the CSP an opportunity to rebalance the control-justice equilibrium for the CSC.

Many CSPs, including Amazon, Microsoft, and IBM, do offer simple breakdowns of performance metrics, responsibilities and the like. However, some CSPs (e.g. SaaS-model CSPs such as Salesforce or Workday) do not define these sufficiently clearly (Prüfer 2018). In a recent survey of IT decision makers (Netapp 2016), 35% believed responsibility for data sits with the CSP, while 3% did not know who would be responsible. GDPR is very clear that responsibility for personal data lies firmly with the data controller (GDPR, Article 24). Under GDPR, the CSC is responsible (regardless of cloud computing model) for ensuring their own compliance requirements are handled effectively by the CSP and that these requirements are adequately reflected in legally binding contractual agreements (called Data Processing Agreements, or DPAs) with the CSP. However, if a CSP is not transparent, for instance, about which core IT services it outsources to sub-processors, the CSC is unable to properly evaluate risks. In some cases, it may be difficult for the CSC (in its role as data controller) to assess the adequacy of the CSP's data handling practices and ensure that its data is handled in a lawful way. This problem is exacerbated by multiple transfers of data, e.g. between federated clouds. This lack of transparency is linked to decreased levels of trust in the CSP, is a key barrier to the adoption of cloud services (Del Alamo et al. 2015) and is associated with a lack of accountability (Pearson 2009; Haeberlen 2010).

Finally, CSCs often overlook the on-premise privacy measures that traditional applications rely on, such as firewall configurations that block logins from specific locations (such as embargoed countries), intrusion prevention systems, behavior analytics platforms that detect insider threats, and log management and alerting solutions (Oracle 2017). Because these and other measures protect the privacy of applications across the enterprise campus, they are often taken for granted in the context of any one particular application, and responsibility for their installation and upkeep falls squarely on the CSC (Oracle 2017).
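
As an illustration of one such taken-for-granted control, the following sketch shows the core of a location-based login block. All names, addresses and the embargo list here are hypothetical; in practice this logic lives in a firewall or identity provider backed by a real GeoIP database, not in application code.

```python
# Hypothetical sketch of blocking logins from embargoed countries.
# The embargo list and the GeoIP lookup table are illustrative stand-ins
# for a maintained policy and a real geolocation database.

EMBARGOED_COUNTRIES = {"CU", "IR", "KP", "SY"}  # example ISO 3166-1 codes

GEOIP = {  # toy lookup; real systems query a GeoIP service
    "203.0.113.7": "IR",
    "198.51.100.2": "US",
}

def allow_login(source_ip: str) -> bool:
    """Reject logins whose source IP geolocates to an embargoed country."""
    country = GEOIP.get(source_ip, "UNKNOWN")
    return country not in EMBARGOED_COUNTRIES

print(allow_login("203.0.113.7"))   # False - blocked
print(allow_login("198.51.100.2"))  # True  - allowed
```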

Alongside issues of transparency, responsibility and accountability, Abed and Chavan (2019) highlight a number of universal privacy issues facing CSCs considering cloud computing adoption, namely: institutional obligations of disclosure to governments, breach and incident disclosure, data accessibility and retention, and physical storage location:

  • Institutional obligation for disclosure to governments: Studies have shown that CSCs considering cloud adoption are concerned that data outsourced to the CSP can be accessed by others, notably public authorities with legitimate or illegitimate objectives, as well as legal and illegal private actors (August et al. 2014). For instance, in 2018, the Clarifying Lawful Overseas Use of Data Act (CLOUD Act, House of Representatives Bill 4943, 2018) enabled law enforcement agencies to access data processed by US-based companies, regardless of whether their servers are located in the US.

  • Breach and incident disclosure: GDPR mandates that a privacy incident/breach be reported within 72 hours of its discovery (Article 33). However, when the incident occurs in the CSP environment, it is very difficult for the CSC to have transparent discovery and disclosure arrangements in place. The breach disclosure requirements placed on the CSP are defined under Article 28 of the GDPR and need to be incorporated into the DPA with the CSP. This contract also needs to clearly define the balance of liability in the event of a data breach.

  • Data accessibility and retention: Guaranteeing availability of cloud data when migrating from one CSP to another has become a primary concern for CSCs (Xue et al. 2017). In the CSP environment, personal data accessibility, retention and deletion become fundamental requirements to prevent CSCs being ‘locked in’ to a given CSP.

  • Physical storage location: CSPs may store data in multiple geographically dispersed jurisdictions. This often makes it challenging to identify which legislation takes precedence when laws conflict, particularly where the laws applying to the physical location of a CSP data centre conflict with those applying to the physical location of the CSC (Abed and Chavan 2019).

5.4 The Privacy Orientation Framework

In the early years of privacy, organizations understood their responsibilities toward privacy to be legal and financial. In the 1970s, discretionary frameworks such as FIPPs emerged, combining privacy standards with due process, consumer rights, and equality protections (Westin 2003). Westin (2003) suggests that from the turn of the century privacy became a first-level social and political issue, in response to 9/11, the Internet, the cell phone, the human genome project, data mining, and the automation of government public records, amongst other developments. In the last decade organizations began to respond to these fundamental shifts in privacy concern with initiatives that exceeded their legal, financial and ethical responsibilities, ranging from privacy-by-design standards and open privacy standards to collaboration with privacy advocacy groups. Given their association with justice and improved privacy protection, our framework is concerned with privacy behaviors that exceed legislation. Much of the literature investigating privacy beyond legislation (Pollach 2011; Allen and Peloza 2015) explores privacy as a Corporate Social Responsibility (CSR). McWilliams and Siegel (2001) define CSR as those actions that appear to further some social good, beyond the interests of the organization and 'beyond that which is required by law'.

Carroll's model of CSR is the best-known model of CSR (Visser 2006). Carroll (1979) identified four pillars of CSR: financial responsibilities, legal responsibilities, ethical responsibilities, and philanthropic responsibilities. Privacy is a financial responsibility, as organizations can be fined significant sums of money for non-compliance. Privacy is also a legal responsibility, as privacy legislation (e.g. the California Consumer Privacy Act (CCPA) or the GDPR) mandates strict governance over the processing of personal data. Carroll (1998) suggested that privacy is not only a legal and financial responsibility but also an ethical one, as legislation lags behind ethics and morality comes into play. Privacy also meets Mason's (1995) test of what constitutes an ethical problem, i.e. whenever one party in pursuit of its goals engages in behavior that materially affects the ability of another party to pursue its goals. In 2018, the then EU Data Protection Supervisor, Giovanni Buttarelli, argued that in order to address this ethical component of privacy, organizations should not rely on bare compliance with the letter of the law but should adopt a 'duty of care' for consumer data (Buttarelli 2018). CSR activities are most appropriate where existing legislation requires compliance with the spirit as well as the letter of the law and where the organization could otherwise fool stakeholders through superior knowledge (Mintzberg 1983).

Finally, privacy can be described as a philanthropic responsibility, where an organization's privacy behaviors demonstrate a 'duty of care' (Buttarelli 2018) towards data owners and society that exceeds compliance. Philanthropy presents itself in many forms, e.g. cash contributions or employee commitments (Smith 1994), and is often explained by social exchange theory (Emerson 1962), where corporations use philanthropy to expand the scope of their business initiatives, influence governments, and position themselves as influential leaders (Jung et al. 2016). Both Husted (2003) and Stannard-Stockton (2011) suggest three classifications of philanthropy: (1) 'check-book philanthropy', i.e. making cash contributions to a cause; (2) in-house projects and philanthropic investments; and (3) strategic philanthropic collaboration between organizations and non-corporate partners. True philanthropy matches the resources of the giver with the needs of the recipient through a socially beneficial relation that is mobilized and governed by a force of morally armed entreaty (Schervish 1998).

As we could find no reference in the literature to philanthropic privacy, based on the definition of CSR from McWilliams and Siegel (2001) we describe it as: 'any privacy behavior(s) exceeding legislative requirements, that furthers privacy as a societal good, beyond the interests of the organization'. Philanthropic privacy behaviors would therefore include advising on government policy, developing open privacy standards or tools, exceeding privacy laws for employees, and even lobbying for strengthened privacy on behalf of the consumer or society. Whilst traditional lobbying often aims at shaping privacy rules in the best interest of the organization, some organizations may try to shape the rules in a way that favors society's needs for privacy. The inclusion of the philanthropic dimension is important, as organizations taking privacy actions beyond their corporate obligations have been found to experience fewer privacy incidents (Accenture and Ponemon 2015). It therefore seems reasonable to suggest that the philanthropic privacy dimension may help identify behaviors that strengthen privacy protection effectiveness.

Although privacy behaviors 'beyond that which is required by law' may demonstrate the differentiating behaviors of justice that this chapter is keen to explore, it is important to highlight that what lies 'beyond that which is required by law' differs from one jurisdiction to another. For instance, the GDPR mandates that organizations provide transparency, choice, purpose and notice to the data subject. Many other jurisdictions, particularly large sectors of the US, mandate these privacy behaviors not through regulation but through industry self-regulation and discretionary codes such as FIPPs. Thus choice, for example, would be considered a legal requirement under GDPR and, in certain US contexts, an ethical requirement. This is an important consideration for any future application of our framework: an evaluation of the local privacy landscape for a given organization would first be required in order to determine its legal minimums.

5.4.1 Framework Dimensions

Greenaway et al. (2015) combine control and justice theory to form a privacy orientation which they call Company Information Privacy Orientation (CIPO). Their research forms the starting point for the privacy orientation framework we develop in this chapter. Greenaway et al. (2015) suggest that an organization's privacy orientation is founded on three dimensions: an information management dimension (how an organization uses information for profit), a legal dimension, and an ethical dimension. These dimensions are underpinned by several theoretical components: stakeholder theory, stockholder theory, social contract theory, information processing theory and cognitive categorization theory (Greenaway et al. 2015). Through triangulation of the three CIPO dimensions with the four pillars of Carroll's (1979) CSR pyramid of responsibilities (economic, legal, ethical, and philanthropic), we extend the CIPO framework with the addition of the philanthropic dimension. Triangulation enables a systematic search for convergence among sources of information to form new or revised themes/categories representing the state of knowledge for a given subject (Denzin 1978; Yardley 2008). See Table 5.1 for the privacy orientation framework dimensions, adapted from the Greenaway et al. (2015) CIPO framework and extended to include the philanthropic dimension.

Table 5.1 Privacy orientation framework dimensions including philanthropy

A CSP may demonstrate a combination of control-based and justice-based behaviors in varying degrees. We therefore position control and justice as two continua, from which four privacy orientations emerge: 'low-justice, low-control' (we call CSPs in this orientation 'Compliers'); 'low-justice, high-control' ('Integrators'); 'high-justice, high-control' ('Citizens'); and 'high-justice, low-control' ('Warriors'). We summarize our privacy orientation framework in Fig. 5.1, and sketch the classification logic in code after the figure.

Fig. 5.1 Privacy orientations framework
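
To make the quadrant logic of Fig. 5.1 explicit, the following toy sketch classifies a CSP from scores on the two continua. The numeric scores and the 0.5 cut-off are our own illustrative assumptions; how control and justice behaviors would actually be measured and weighted remains an open research question.

```python
# Toy classification over the two continua of Fig. 5.1. Scores in [0, 1]
# and the 0.5 threshold are illustrative assumptions, not measured constructs.

def privacy_orientation(control: float, justice: float,
                        threshold: float = 0.5) -> str:
    """Map (control, justice) scores to one of the four orientations."""
    high_control = control >= threshold
    high_justice = justice >= threshold
    if high_justice and high_control:
        return "Citizen"
    if high_justice:
        return "Warrior"      # high justice, low control
    if high_control:
        return "Integrator"   # low justice, high control
    return "Complier"         # low justice, low control

assert privacy_orientation(control=0.2, justice=0.1) == "Complier"
assert privacy_orientation(control=0.9, justice=0.2) == "Integrator"
assert privacy_orientation(control=0.8, justice=0.8) == "Citizen"
assert privacy_orientation(control=0.3, justice=0.9) == "Warrior"
```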

5.4.1.1 Compliers

Complier CSPs focus on privacy risk mitigation. CSPs in this orientation demonstrate privacy behaviors that measure low on justice and low on control. This orientation sets compliance as the goal, with privacy governance assigned to functional management. Their privacy policies and practices are often centred on compliance with laws and industry standards. These organizations report little more than the privacy metrics mandated by regulation. They may lobby on privacy, but only where doing so benefits the organization. They have no real philanthropic perspective with regard to privacy. Compliers tend to provide rudimentary transactional services (for instance Cloudways and Digital Ocean). They will typically offer security and privacy features at an additional cost, or, like Amazon AWS, allow the CSC to reconfigure private buckets to be non-private. Compliers rarely exceed legislative minimums.

In 2019, Capital One (using AWS as their CSP) suffered a breach impacting 100 million customers' financial data, including social security numbers (Business Insider 2019). The AWS server used by Capital One was vulnerable to a well-known attack called server-side request forgery (SSRF). AWS denied liability (as their contracts indemnified them); however, without being legally required to do so, AWS's largest competitors (Google and Microsoft) had already addressed the threat of SSRF attacks two years previously.
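
For context, two AWS-side hardening steps now close off the main avenues exploited in incidents of this kind: blocking public access at the bucket level, and requiring the session-token-based IMDSv2 (which AWS introduced after this breach) so that a simple SSRF request cannot read instance-metadata credentials. The sketch below uses real boto3 calls, but the bucket and instance identifiers are placeholders, and this is not a reconstruction of Capital One's or AWS's actual remediation.

```python
import boto3

# Block all forms of public access on an S3 bucket (placeholder name),
# preventing the "reconfigure private buckets to be non-private" failure
# mode mentioned above.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Require IMDSv2 session tokens on an EC2 instance (placeholder ID), so
# credential-stealing SSRF requests to the metadata endpoint fail.
ec2 = boto3.client("ec2")
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder
    HttpTokens="required",
)
```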

5.4.1.2 Integrators

Integrator CSPs offer robust compliance with privacy legislation whilst integrating society's increasing expectation that privacy be addressed as a social responsibility. CSPs in this privacy orientation demonstrate privacy behaviors that measure low on justice and high on control. Relinquishing control (by offering increased justice behaviors) to their CSCs is not important to these organizations, as they require widespread access to data and systems in order to minimize costs and maximize profit. CSPs in this orientation may actively reflect on ways they can use social issues such as privacy to gain competitive advantage. For instance, privacy rights for employees, their families, and local communities may be supported. The CSP's objective in this orientation is to mitigate the erosion of economic value in the medium term and to achieve longer-term gains by integrating responsible privacy practices into their daily operations (Zadek 2004). These CSPs not only want to monetize their cloud service, they also want to maximize data value. Although CSPs in this orientation may appear to demonstrate philanthropic privacy, for example by producing white papers, open standards, open tools and the like, this philanthropy towards privacy is not always a 'felt' value.

For instance, Google Cloud’s published privacy commitments (Google 2020) state that “we believe that trust is created through transparency, and we want to be transparent about our commitments and what you can expect when it comes to our shared responsibility for protecting and managing your data in the cloud”. Yet in 2019, Google were fined €50m by the French data protection authority for lack of transparency in their privacy policies (CNIL 2019).

5.4.1.3 Citizens

Citizen CSPs assume a citizenship role, leading on privacy issues and transforming their business models to achieve this objective. Citizen CSPs demonstrate privacy behaviors that measure high on justice and high on control. They openly acknowledge their new roles and responsibilities towards society, and recognize how responsibilities towards the private, public, and social sectors have become interdependent (Latapi-Agudelo et al. 2019). The Citizen orientation addresses privacy strategically: privacy programs are focused on building and maintaining sustainable relationships and often exceed legislative obligations, reflecting philanthropic privacy. These organizations incorporate privacy as a value held strongly by the organization, associating their privacy behaviors with other strong values such as trust and integrity (Mirvis and Googins 2006). CSPs in the Citizen orientation broaden their agenda by expanding their privacy concerns and deepening the involvement of top management in leading on privacy issues. They form long-term alliances and partnerships with stakeholders in order to drive change in key privacy issues, a leadership that Caldwell et al. (2010) refer to as a 'stewardship role', generating commitment from other stakeholders and organizations.

Microsoft, for instance, established an online trust centre to respond to queries on privacy for all its products, particularly Office 365. It also provides tools to simplify responding to the Subject Access Requests mandated by GDPR (Microsoft 2018a). Microsoft has also extended the rights available to Europeans under the GDPR to all its consumers, noting that "GDPR establishes important principles that are relevant globally" (Microsoft 2018b). IBM, as another example, were among the first companies to sign the EU Data Protection Code of Conduct for Cloud Service Providers (IBM 2017).

5.4.1.4 Warriors

Warrior CSPs highlight privacy as a societal issue and may even revolt against laws in order to defend it. CSPs in this orientation demonstrate privacy behaviors that measure high on justice but, unlike those of the Citizen, may also measure low on control. In terms of philanthropic privacy, Warrior CSPs may lobby government for increased privacy for individuals, knowing it will increase organizational costs, reduce shareholder returns and/or marginalize certain stakeholders. They may stake claims to privacy to such a degree that they will breach laws if they feel those laws compromise social or democratic freedoms.

Apple, for example, in 2015 and 2016 refused to comply with, and challenged, at least 12 orders issued by the FBI compelling Apple to enable decryption of phones involved in criminal investigations and prosecutions. It could be argued that Apple monetize 'privacy' as part of their brand; however, in the US they did not garner widespread support for their decision: at the time, 51% of American smartphone users were against Apple's stance, while only 38% supported it (Pew 2016). Warriors may demonstrate less control than those in the Citizen orientation, but in its place they introduce strong privacy governance and accountability structures. At Apple, for instance, any collection of customer data requires sign-off from a committee of three "privacy czars" and a top executive (Reuters 2016).

Cisco, as another example, recognized that their systems could be used in a manner contrary to their values, stating that "Cisco technologies are used by government agencies to promote public safety, but the same technology can be used for surveillance that would violate individuals' privacy" (Cisco 2018c). Cisco issued a position statement opposing government backdoors and opposing attempts to prohibit public disclosure about new surveillance capabilities demanded by governments (Cisco 2018b). Cisco therefore retain a high level of control to ensure that their products cannot be misused, but balance this control with a commitment to "build our products on the open, global standards we believe are critical to overcoming censorship, protecting privacy, and keeping the world connected" (Cisco 2018c). Cisco also undertakes several philanthropic privacy activities, such as the Cisco Privacy Maturity Benchmark Study and sponsoring the National Cybersecurity Alliance's Data Privacy Day (Cisco 2018c).

5.5 Conclusion

Researchers, practitioners, and policymakers need a better understanding of how and why organizations differ in their treatment of consumer privacy. Belanger and Xu (2015) advocate that privacy researchers conduct qualitative interpretive research on the relationships between privacy antecedents and outcomes. Using our framework, such research could categorize and weigh the levels of control and justice demonstrated by a CSP's privacy behaviors and position the CSP in one of the four orientations. The framework also presents opportunities for future research to empirically examine the effectiveness of one orientation over another, or the potential impact of an organization's orientation on other variables such as privacy protection effectiveness, cybersecurity behaviors, or privacy strategies. The framework can also be applied in different contexts, comparing privacy orientations for CSPs in different countries, of different sizes, or with different market dependencies on data.

Being able to position a CSP's privacy orientation allows the CSP to establish baselines within its industry and to determine whether certain orientations provide more robust security and privacy protection, where privacy incidents may be reduced. At the same time, the framework challenges CSPs to articulate their ethical, financial, legal, and philanthropic strategies. Moreover, identifying their privacy orientation will assist CSPs to better align their actual privacy programs with CSCs' top concerns (Chan 2003). If CSPs can use their privacy orientation to make their privacy provisions more effective, then the wider stakeholder community, particularly the CSC and ultimately the data subject, may benefit. We hope the framework provides a helpful foundation for future theoretical and empirical work.