Privacy, Security and Trust in Cloud Computing

Part of the Computer Communications and Networks book series (CCN)


Cloud computing refers to the underlying infrastructure for an emerging model of service provision that has the advantage of reducing cost by sharing computing and storage resources, combined with an on-demand provisioning mechanism relying on a pay-per-use business model. These new features have a direct impact on information technology (IT) budgeting but also affect traditional security, trust and privacy mechanisms. The advantages of cloud computing—its ability to scale rapidly, store data remotely and share services in a dynamic environment—can become disadvantages in maintaining a level of assurance sufficient to sustain the confidence of potential customers. Some core traditional mechanisms for addressing privacy (such as model contracts) are no longer flexible or dynamic enough, so new approaches need to be developed to fit this new paradigm. In this chapter, we assess how security, trust and privacy issues occur in the context of cloud computing and discuss ways in which they may be addressed.


Keywords: Cloud computing · Privacy · Security · Risk · Trust

1.1 Introduction

Although there is no single, definitive definition of cloud computing, a commonly accepted one is provided by the United States National Institute of Standards and Technology (NIST):

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. [1]

This shared pool of resources is unified through virtualization or job scheduling techniques. Virtualization is the creation of a set of logical resources (whether it be a hardware platform, operating system, network resource or other resource) usually implemented by software components that act like physical resources. In particular, software called a ‘hypervisor’ emulates physical computer hardware and thus allows the operating system software running on the virtual platform—a virtual machine (VM)—to be separated from the underlying hardware resources.

The resources made available through cloud computing include hardware and systems software in remote data centres, as well as services based upon these that are accessed through the Internet; these resources can be managed to scale up dynamically to match the load, using a pay-per-use business model. Key features advertised are elasticity, multi-tenancy, maximal resource utilization and pay per use. These features provide the means to leverage large infrastructures such as data centres through virtualization or through job and resource management.
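The pay-per-use model described above can be sketched as a simple metered-billing calculation. The resource names and unit prices below are purely illustrative assumptions, not any specific CSP's pricing; real cloud billing involves tiers, reservations and data-egress charges.

```python
# Sketch of pay-per-use billing with hypothetical unit prices.
# A tenant is charged only for the resources it actually consumes.
RATES = {
    "vm_hours": 0.10,          # illustrative price per VM-hour
    "storage_gb_months": 0.02, # illustrative price per GB-month stored
    "egress_gb": 0.05,         # illustrative price per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage times unit price for each resource consumed."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# Elasticity in cost terms: a month running one VM continuously (720 h)
# costs far less than a month that bursts to four VMs (2880 h), and an
# idle tenant pays nothing.
steady = monthly_bill({"vm_hours": 720, "storage_gb_months": 50})
burst = monthly_bill({"vm_hours": 2880, "storage_gb_months": 50})
```

This contrasts with the traditional model, where the burst capacity would have to be purchased upfront and paid for even in the steady months.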

Cloud computing (or, more simply, ‘cloud’) provides a market opportunity with a huge potential both for efficiency and new business opportunities (especially in service composition) and is almost certain to deeply transform our information technology infrastructures, models and services. Not only are there cost savings due to economies of scale on the service provider side and pay-as-you-go models, but business risk is decreased because there is less need to borrow money for upfront investment in infrastructure.

The adoption of cloud computing may move quite quickly, depending on local requirements, business context and market specificities. We are still in the early stages, but cloud technologies are being adopted widely in all parts of the world. The economic potential of cloud computing and its capacity to accelerate innovation are putting businesses and governments under increased pressure to adopt cloud-based solutions.

Although the hype around cloud tends to encourage people to think that it is a universal panacea, this is not the case, and promoters quite often ignore the complexities that the cloud adds. There are a number of challenges to providing cloud computing services: the need to comply with local and regional regulations; obtaining the necessary approvals when data is accessed from another jurisdiction; additional complexity in terms of governance, maintenance and liability inherent to cloud; and a perceived lack of trust in cloud services. Many chief information officers (CIOs) in large enterprises identify security concerns as the top reason for not embracing the public cloud more aggressively and thereby not benefitting from the associated cost optimizations [2, 3]. Added to this rather common concern from technical audiences is a growing concern among data subjects, consumer advocates and regulators about the potentially significant impact on personal data protection and the required compliance with local regulations [4, 5]. The US Patriot Act, a federal law that can compel disclosure of customer and employee personal information, in particular raises fears about transferring information to the USA [6]. Cloud can exacerbate the strain on traditional frameworks for privacy that globalization has already started. For example, location matters from a legal point of view, but in the cloud, information might be in multiple places, might be managed by different entities, and it may be difficult to know its geographic location and which specific servers or storage devices are being used. It is currently difficult to ascertain and meet compliance requirements, as existing global legislation is complex and includes export restrictions, data retention restrictions, sector-specific restrictions and legislation at state and/or national levels. Legal advice is required, transborder data flow restrictions need to be taken into account, and care must be taken to delete data and virtual storage devices when appropriate. Although the focus is often on security, in fact the most complex issue to address is privacy.

Context is an important aspect, as different information can have different privacy, security and confidentiality requirements. Privacy needs to be taken into account only if the cloud service handles personal information (in the sense of collecting, transferring, processing, sharing, accessing or storing it). Moreover, privacy threats differ according to the type of cloud scenario. There is a low privacy threat if the cloud services are to process information that is (or is very shortly to be) public. That is why the mass conversion of scanned images to PDF carried out by The New York Times in the early days of the cloud, often highlighted at the time as a classic demonstration of the benefits of a cloud approach, was a good scenario for cloud computing. On the other hand, there is a high privacy threat for cloud services that are dynamically personalized, based on people's location, preferences, calendar, social networks, etc. Even if the same information is involved, there may be different data protection requirements in different contexts due to factors including location and trust in the entities collecting and processing it. In addition, it should be borne in mind that there may be confidentiality issues in the cloud even if there is no 'personal data' involved: in particular, intellectual property and trade secrets may require protection similar to that for personal data and in some cases may benefit from practices and technologies developed specifically for ensuring appropriate personal data handling within the network of cloud service providers (which in this chapter we will refer to as a 'cloud ecosystem').

Opportunities are being created for some service providers to offer cloud services that have greater assurance and that employ mechanisms to reduce risk. These services might be more expensive than ones with minimal guarantees in terms of security and privacy, but in certain contexts, and especially where sensitive information is involved, this is what is needed to foster trust in using such services while still allowing economic savings and other benefits of cloud computing. The potential benefits can be significant: for small- and medium-sized enterprises (SMEs) in particular, using cloud services can actually deliver greater security than they have the expertise or budget to provide in-house. On the other hand, there are a number of potential pitfalls and complications, especially due to the global nature of business and the associated potential for increased data exposure and non-compliance with a matrix of different regulations, and these need to be addressed.

Overall, there is a paradigm change with cloud that can increase security concerns (especially loss of control, data integrity, data confidentiality and access by governments due to US Patriot Act and other legislation), resulting in complexity increasing along organizational, technical and regulatory dimensions. We shall consider these aspects further in this chapter.

The structure of the chapter is as follows:
  • Section 1.2 gives an overview of cloud computing deployment and service models.

  • Section 1.3 discusses the sometimes complex relationship between privacy, security and trust.

  • Section 1.4 describes privacy issues for cloud computing.

  • Section 1.5 describes security issues for cloud computing.

  • Section 1.6 describes trust issues for cloud computing.

  • Section 1.7 briefly discusses a number of approaches to addressing privacy, security and trust issues in cloud computing.

  • Section 1.8 provides a summary and conclusions.

1.2 Cloud Deployment and Service Models

Building on the explanation given in the previous section, cloud computing refers to the underlying infrastructure (which may be very complex) that provides services to customers via defined interfaces. There are different layers of cloud services that refer to different types of service model, each offering discrete capabilities. Apart from management and administration, the major layers are:
  • Infrastructure as a Service (IaaS): the delivery of computing resources as a service, including virtual machines and other abstracted hardware and operating systems. The resources may be managed through a service Application Programming Interface (API). The customer rents these resources rather than buying and installing them in its data centre, and often, the resources are dynamically scalable, paid for on a usage basis. Examples include Amazon EC2 and S3.

  • Platform as a Service (PaaS): the delivery of a solution stack for software development including a runtime environment and lifecycle management software. This allows customers to develop new applications using APIs deployed and configurable remotely. Examples include Google App Engine, and Microsoft Azure.

  • Software as a Service (SaaS): the delivery of applications as a service, available on demand and paid for on a per-use basis. In simple multi-tenancy, each customer has its own resources that are segregated from those of other customers. A more efficient form is fine-grained multi-tenancy, where all resources are shared, except that customer data and access capabilities are segregated within the application. Examples include online word processing and spreadsheet tools, customer relationship management (CRM) services and web content delivery services (Salesforce CRM, Google Docs, etc.)

These three are the main layers, although there can also be other forms of service provided, such as business process as a service, data as a service, security as a service, storage as a service, etc.
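The fine-grained multi-tenancy described for SaaS above can be sketched as follows. All tenants share one application and one data store, and segregation is enforced inside the application layer by scoping every query to a tenant identifier. The class and field names (`SharedCrmStore`, `tenant_id`) are illustrative assumptions, not any specific SaaS product's API.

```python
# Sketch of fine-grained multi-tenancy: one shared store serves all
# customers, but data and access capabilities are segregated per tenant
# within the application.
class SharedCrmStore:
    def __init__(self):
        self._records = []  # a single shared "table" for all tenants

    def add(self, tenant_id, record):
        # every row is tagged with the tenant that owns it
        self._records.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # access is scoped: a tenant can only retrieve its own rows
        return [r for r in self._records if r["tenant_id"] == tenant_id]

store = SharedCrmStore()
store.add("acme", {"contact": "alice@example.com"})
store.add("globex", {"contact": "bob@example.com"})
```

In simple multi-tenancy, by contrast, each tenant would get its own store instance; the shared-table form is more resource-efficient but makes correct segregation logic in the application security-critical.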

These layers form a kind of stack, as illustrated in Fig. 1.1. For example, in IaaS, consumers can deploy and run software, with a cloud service provider (CSP) controlling the underlying cloud infrastructure. In PaaS, consumers deploy (onto a cloud infrastructure run by a CSP) applications that have been created using programming languages and tools supported by that provider. In SaaS, consumers use CSPs’ applications running on a cloud infrastructure that is typically provided by another CSP. In practice, IT vendors providing cloud services often include elements from several layers.
Fig. 1.1

Cloud service models

Cloud computing has several deployment models, of which the main ones are:
  • Private: a cloud infrastructure operated solely for an organization, being accessible only within a private network and being managed by the organization or a third party (potentially off premise)

  • Public: a publicly accessible cloud infrastructure

  • Community: a cloud infrastructure shared by several organizations with shared concerns

  • Hybrid: a composition of two or more clouds that remain separate but between which there can be data and application portability

  • Partner: cloud services offered by a provider to a limited and well-defined number of parties

Cloud computing services use ‘autonomic’ or self-regulating technologies which allow services to react and make decisions on their own, independently of CSP operators and transparently to customers, based on preset policies or rules; autonomic processes might, for example, independently scale up service provision in reaction to a customer’s usage, or transfer data processing within a virtual machine from a physical server location in the USA to another in Japan, based on the comparative usage of the physical servers.
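A minimal sketch of such a preset autonomic policy is shown below: the service decides on its own, from a rule, whether to acquire or release capacity. The thresholds and limits are illustrative assumptions, not drawn from any real CSP's autoscaler.

```python
# Sketch of a self-regulating scaling rule: capacity changes are decided
# by a preset policy, without operator intervention.
SCALE_UP_UTIL = 0.80    # add a VM above 80% average utilization (assumed)
SCALE_DOWN_UTIL = 0.30  # release a VM below 30% average utilization (assumed)
MIN_VMS, MAX_VMS = 1, 16

def autoscale(current_vms, avg_utilization):
    """Return the VM count chosen by the policy for the next interval."""
    if avg_utilization > SCALE_UP_UTIL and current_vms < MAX_VMS:
        return current_vms + 1
    if avg_utilization < SCALE_DOWN_UTIL and current_vms > MIN_VMS:
        return current_vms - 1
    return current_vms
```

The same decision-without-operator pattern underlies the data-placement example in the text: a policy comparing server utilization could equally decide to move a VM between physical locations, which is precisely what creates the compliance questions discussed later.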

In most of these cloud computing models, multiple customers share software and infrastructure hosted remotely—a process known as multi-tenancy. Hence, one instance of software, and the physical machine it runs on, serves clients from different companies, although security mechanisms are used to provide a protected VM environment for each user. Therefore, cloud computing can be thought of as an evolution of outsourcing, where an organization’s business processes or infrastructures are contracted out to a different provider. A key difference is that with cloud computing, it can be difficult, or even impossible, to identify exactly where the organization’s data actually is. This is partly because CSPs may have server farms in several countries, and it may not be possible for the CSP to guarantee to a customer that data will be processed in a particular server farm, or even country. Amazon Web Services (AWS) and Google have multiple data centres worldwide, details of the locations of which are often confidential. Offshoring, a term traditionally used where business processes are relocated to a different country, is thus also seen as a common element of cloud service provision.

In addition, it is the case that just as their customers use cloud services to obtain variable amounts of service provision according to their needs over time (usually referred to as ‘scalability’), CSPs may themselves lease processing and storage capacity from other service providers to meet their own requirements. Thus, when a customer processes data using a CSP, that data may simultaneously reside in a jurisdiction outside that of both the customer and CSP, and on a third party’s computer systems.

From a legal and regulatory compliance perspective, several of the key characteristics of cloud computing services, including outsourcing, offshoring, virtualization and autonomic technologies, may be problematic, for reasons ranging from software licensing and the content of service-level agreements (SLAs) to determining which jurisdiction's laws apply to data hosted 'in the cloud' and the ability to comply with data privacy laws [7, 8]. For example, the autonomic aspects of cloud computing, namely self-optimization and self-healing, can pose new risks. Self-optimization grants a degree of autonomy in decision making, for example, automatically adapting services to meet the changing needs of customers and service providers; this challenges enterprises' ability to maintain consistent security standards. Self-healing allows CSPs to provide appropriate business continuity, recovery and backup, but it may not be possible to determine with any specificity where data processing takes place within the cloud infrastructure [9]. Autonomic aspects of cloud computing, like many of the other aspects mentioned above, are among its assets but need to be tailored to comply with privacy and legal requirements.

Before considering the privacy, security and trust issues associated with cloud computing in more detail, we analyse in the next section what these terms mean and how they interrelate.

1.3 The Relationship Between Privacy, Security and Trust

Privacy and trust are both complex notions for which there is no standard, universally accepted definition. Consequently, the relationship between privacy, security and trust is necessarily intricate. In this section, we explain some of the main elements of this relationship.

1.3.1 Privacy

At the broadest level (and particularly from a European standpoint), privacy is a fundamental human right, enshrined in the United Nations Universal Declaration of Human Rights (1948) and subsequently in the European Convention on Human Rights and in national constitutions and charters of rights such as the UK Human Rights Act 1998. Since at least the 1970s, the primary focus of privacy has been personal information, with particular concern about protecting individuals from government surveillance and from potential mandatory disclosure of private information held in databases. A decade later, concerns were raised about direct marketing and telemarketing, and, later still, consideration was given to the increasing threats of online identity theft and spamming. There are various conceptions of privacy, ranging from 'the right to be left alone' [10] and 'control of information about ourselves' [11], through 'the rights and obligations of individuals and organizations with respect to the collection, use, disclosure, and retention of personally identifiable information' [12], to a focus on the harms that arise from privacy violations [13]. Another influence is Nissenbaum's idea of privacy as 'contextual integrity', which provides a way of assessing the challenges posed by information technologies. Contextual integrity ties adequate protection for privacy to the norms of specific contexts, which are essentially constraints on information flows, so that information gathering and dissemination should be made appropriate to the particular context [14, 15].

In the commercial, consumer context, privacy entails the protection and appropriate use of the personal information of customers and the meeting of customers' expectations about its use. For organizations, privacy entails the application of laws, policies, standards and processes by which personal information is managed. What is appropriate will depend on the applicable laws, individuals' expectations about the collection, use and disclosure of their personal information, and other contextual information; hence, one way of thinking about privacy is simply as 'the appropriate use of personal information under the circumstances' [16]. Data protection is the management of personal information; the term is often used within the European Union in relation to privacy-related laws and regulations (although in the USA its usage focuses more on security).

In broad terms, personal information describes facts, communications or opinions which relate to the individual and which it would be reasonable to expect him or her to regard as intimate or sensitive and therefore about which he or she might want to restrict collection, use or sharing. The terms 'personal information' and 'personal data' are commonly used within Europe and Asia, whereas in the USA, the term 'Personally Identifiable Information' (PII) is normally used, but they generally refer to the same (or a very similar) concept. This can be defined as information that can be traced to a particular individual and includes such things as name, address, phone number, social security or national identity number, credit card number, email address, passwords and date of birth. There are a number of types of information that could be personal data but are not necessarily so in all circumstances, such as usage data collected from computer devices such as printers; location data; behavioural information such as viewing habits for digital content, users' recently visited websites or product usage history; and online identifiers such as IP addresses, radio-frequency identity (RFID) tags, cookie identifiers and unique hardware identities.
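To make the PII examples above concrete, the sketch below flags obvious identifiers in free text using simple patterns. The two patterns (email address, US-style social security number) are illustrative assumptions and far from exhaustive; as the chapter notes, whether data is personal often depends on context, which pattern matching alone cannot capture.

```python
# Sketch of flagging obvious PII in free text with simple regular
# expressions. Illustrative only -- real PII discovery needs many more
# patterns plus contextual judgement.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US-style SSN (assumed format)
}

def find_pii(text):
    """Return the sorted names of PII categories detected in the text."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))
```

A scan like this is at best a first filter: context-dependent identifiers such as IP addresses or usage histories would need policy decisions, not just patterns.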

The current European Union (EU) definition of personal data is that

‘personal’ data shall mean any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity; [17, p. 8]

Some personal data elements are considered more sensitive than others, although the definition of what is considered sensitive personal information may vary depending upon jurisdiction and even on particular regulations. In Europe, sensitive personal information is called special categories of data, which refers to information on religion or race, political opinions, health, sexual orientation, trade-union membership and data relating to offences or criminal convictions, and its handling is specially regulated. In the USA, social security and driver’s licence numbers, personal financial information and medical records are commonly treated as sensitive. Health information is considered sensitive by all data protection laws that define this category. Other information that may be considered sensitive includes job performance information, biometric information and collections of surveillance camera images in public places. In general, sensitive information requires additional privacy and security limitations or safeguards because it can be considered as a subset of personal information with an especially sensitive nature.

Key privacy terminology includes the notion of data controller, data processor and data subject. Their meaning is as follows:
  • Data controller: An entity (whether a natural or legal person, public authority, agency or other body) which alone, jointly or in common with others determines the purposes for which and the manner in which any item of personal information is processed

  • Data processor: An entity (whether a natural or legal person, public authority, agency or any other body) which processes personal information on behalf of, and upon the instructions of, the data controller

  • Data subject: An identified or identifiable individual to whom personal information relates, whether such identification is direct or indirect (e.g. by reference to an identification number or to one or more factors specific to physical, physiological, mental, economic, cultural or social identity)

The fair information practices developed in the USA in the 1970s [18] and later adopted and declared as principles by the Organisation for Economic Co-operation and Development (OECD) and the Council of Europe [19] form the basis for most data protection and privacy laws around the world. These principles can be broadly described as follows:
  1. Data collection limitation: data should be collected legally, with the consent of the data subject where appropriate, and should be limited to the data that is needed.

  2. Data quality: data should be relevant and kept accurate.

  3. Purpose specification: the purpose should be stated at the time of data collection.

  4. Use limitation: personal data should not be used for other purposes without the consent of the individual.

  5. Security: personal data should be protected by a reasonable degree of security.

  6. Openness: individuals should be able to find out what personal data an organization holds and how it is used.

  7. Individual participation: an individual should be able to obtain details of all information about them held by a data controller and challenge it if incorrect.

  8. Accountability: the data controller should be accountable for complying with these principles.


This framework can enable the sharing of personal information across participating jurisdictions without the need for individual contracts. Furthermore, such legislation supports the observance and enforcement of the protection of personal information as a fundamental right.

In Europe, the European Data Protection Directive 95/46/EC (and the supporting legislation of each country) implements these fair information principles, along with some additional requirements, including transborder data flow restrictions. Legislation similar to the European Data Protection Directive has been, and continues to be, enacted in many other countries and regions, including Australia, New Zealand, Hong Kong, Japan and the APEC economies. Notably, legislation in Canada, Argentina, Israel, Switzerland, Guernsey, Iceland, Liechtenstein, Norway, Jersey and the Isle of Man is considered strong enough to be 'adequate' by the European Commission. (Adequacy refers to whether a specific country is considered to provide an adequate level of protection for processing the personal data of subjects from within the European Union.) In contrast, the USA does not have a comprehensive regime of data protection but instead has a variety of laws, such as the Health Insurance Portability and Accountability Act (HIPAA), which are targeted at the protection of particularly sensitive types of information. This US approach to privacy legislation is historically sector-based or enacted at the state level (e.g. the State of Massachusetts has set out security standards for protecting the personal information of residents of that state) and places few if any restrictions on transborder data flow. The USA is considered adequate for data transfer only under the limitations of the Safe Harbor agreement [20].

At the time of writing, regulations, enforcement activities and sanctions are increasing the world over. The USA is introducing a Consumer Privacy Bill of Rights [21], and the EU is revising its Data Protection Directive and regulation [22], with the result that FTC enforcement will be strengthened within the USA, and current plans are that European DPAs will be able to impose fines of up to 2 % of worldwide annual turnover on companies that do not have mechanisms in place to underpin regulatory data protection compliance [22]. Other consequences of privacy failure for data controllers include civil liability (whereby data subjects enforce their rights), criminal liability (fines and imprisonment), investment risk, business continuity impact and reputational damage.

To summarize, privacy is regarded as a human right in Europe, whereas in America, it has been traditionally viewed more in terms of avoiding harm to people in specific contexts. It is a complex but important notion, and correspondingly, the collection and processing of personal information is subject to regulation in many countries across the world. Hence, cloud business scenarios need to take this into account.

1.3.2 Security

For the purposes of this book, by security, we mean information security. In this sense, security may be defined as:

Preservation of confidentiality, integrity and availability of information; in addition, other properties such as authenticity, accountability, non-repudiation and reliability can also be involved. [23]

Confidentiality is commonly, but erroneously, equated with privacy by some security practitioners. It is defined as:

The property that information is not made available or disclosed to unauthorized individuals, entities or processes. [23]

Security is a necessary but not a sufficient condition for privacy: security is actually one of the core privacy principles, as considered in the previous subsection. Correspondingly, it is a common requirement under the law that if a company outsources the handling of personal information or confidential data to another company, it has some responsibility to ensure that the other company uses 'reasonable security' to protect those data. This means that any organization creating, maintaining, using or disseminating records of PII must ensure that the records have not been tampered with and must take precautions to prevent misuse of the information. Specifically, to ensure the security of the processing of such information, data controllers must implement appropriate technical and organizational measures to protect it against:
  • Unauthorized access or disclosure: in particular where the processing involves the transmission of data over a network

  • Destruction: accidental or unlawful destruction or loss

  • Modification: inappropriate alteration

  • Unauthorized use: all other unlawful forms of processing

Mechanisms to do this include risk assessment, implementing an information security program and putting in place effective, reasonable and adequate safeguards that cover physical, administrative and technical aspects of security. In the case of cloud computing, the CSP needs to implement ‘reasonable security’ when handling personal information.
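As one concrete example of a technical safeguard against the 'modification' threat listed above, a keyed hash (HMAC) lets a data controller detect inappropriate alteration of a record at rest or in transit. This is a minimal sketch using Python's standard library; the hard-coded key is illustrative only, and real deployments need proper key management, plus separate mechanisms for confidentiality and availability.

```python
# Sketch of an integrity safeguard: a keyed hash over each record
# detects tampering. Protects integrity only -- not confidentiality.
import hmac
import hashlib

SECRET_KEY = b"demo-key-do-not-use-in-production"  # illustrative only

def seal(record: bytes) -> bytes:
    """Compute an integrity tag for a record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Check a record against its tag, in constant time."""
    return hmac.compare_digest(seal(record), tag)

tag = seal(b"name=Alice;dob=1980-01-01")
assert verify(b"name=Alice;dob=1980-01-01", tag)        # intact record
assert not verify(b"name=Mallory;dob=1980-01-01", tag)  # altered record
```

In a cloud setting, a customer can compute such tags before handing data to a CSP and re-verify on retrieval, gaining integrity assurance that does not depend on trusting the provider's internal controls.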

Privacy differs from security in that it relates to handling mechanisms for personal information, dealing with individual rights and aspects like fairness of use, notice, choice, access, accountability and security. Many privacy laws also restrict the transborder data flow of personal information. Security mechanisms, on the other hand, focus on provision of protection mechanisms that include authentication, access controls, availability, confidentiality, integrity, retention, storage, backup, incident response and recovery. Privacy relates to personal information only, whereas security and confidentiality can relate to all information.

1.3.3 Trust

Here, we give a brief analysis of online trust. Further consideration of key aspects related to trust in the cloud and an assessment of consumer and corporate IT concerns about the cloud is given in Sect. 1.6.

Trust is a complex concept for which there is no universally accepted scholarly definition. Evidence from a contemporary, cross-disciplinary collection of scholarly writing suggests that a widely held definition of trust is as follows:

Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behaviour of another. [24]

Yet this definition does not fully capture the dynamic and varied subtleties involved. For example, trust has also been characterized as: letting the trustee take care of something the trustor cares about [25]; the subjective probability with which the trustor assesses that the trustee will perform a particular action [26]; the expectation that the trustee will not engage in opportunistic behaviour [27]; and a belief, attitude or expectation concerning the likelihood that the actions or outcomes of the trustee will be acceptable or will serve the trustor's interests [28].

Trust is a broader notion than security, as it includes subjective criteria and experience. Correspondingly, there exist both hard (security-oriented) and soft (non-security-oriented) trust solutions [29]. 'Hard' trust involves aspects like authenticity, encryption and security in transactions, whereas 'soft' trust involves human psychology, brand loyalty and user-friendliness [30]. Nevertheless, some soft issues are involved in security. An example of soft trust is reputation, a component of online trust that is perhaps a company's most valuable asset [31] (although, of course, a CSP's reputation may not be justified). Brand image is associated with trust and suffers if there is a breach of trust or privacy.

People often find it harder to trust online services than offline services [32] because in the digital world there is an absence of physical cues and there may not be established centralized authorities [33]. The distrust of online services can even negatively affect the level of trust accorded to organizations that may have been long respected as trustworthy [34].

There are many different ways in which online trust can be established: security may be one of these (although security, on its own, does not necessarily imply trust [31]). Some would argue that security is not even a component of trust: Nissenbaum argues that the level of security does not affect trust [35]. On the other hand, an example of increasing security resulting in increased trust comes from people being more willing to engage in e-commerce if they are assured that their credit card numbers and personal data are cryptographically protected [36].

There can be differing phases in a relationship such as building trust, a stable trust relationship and declining trust. Trust can be lost quickly: as Nielsen states [37], ‘It [trust] is hard to build and easy to lose: a single violation of trust can destroy years of slowly accumulated credibility’. Various approaches have targeted the measurement of factors that influence trust and the analysis of related causal relationships [38]. Many trust metrics have traditionally relied on a graph and have dealt with trust propagation [39, 40]; other techniques used to measure trust include fuzzy cognitive maps [41].
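The graph-based trust propagation mentioned above can be sketched in a few lines. The following is a deliberately simple, illustrative scheme (not any particular published metric): direct trust values in [0, 1] are multiplied along a chain of recommendations, so trust decays with each hop, and parallel paths are combined by taking the best path.

```python
# Toy graph-based trust metric (illustrative, not a published algorithm):
# edges carry direct trust in [0, 1]; trust is propagated by multiplying
# values along a recommendation chain and taking the best simple path.
def propagated_trust(graph, source, target, seen=None):
    """Maximum product of direct trust values over simple paths source -> target."""
    if source == target:
        return 1.0
    seen = (seen or set()) | {source}
    best = 0.0
    for neighbour, direct in graph.get(source, {}).items():
        if neighbour not in seen:
            best = max(best, direct * propagated_trust(graph, neighbour, target, seen))
    return best

# Alice trusts Bob 0.9 and Carol 0.5; Bob trusts a CSP 0.8; Carol trusts it 0.9.
graph = {"alice": {"bob": 0.9, "carol": 0.5},
         "bob":   {"csp": 0.8},
         "carol": {"csp": 0.9}}
print(propagated_trust(graph, "alice", "csp"))  # ~0.72, via the Bob path
```

Real trust metrics differ in how they combine parallel paths and discount long chains; fuzzy cognitive maps [41] take a quite different, feedback-based approach.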

When assessing trust in relation to cloud computing, it may be useful to distinguish between social and technological means of providing persistent and dynamic trust, as all of these aspects of trust can be necessary [42]. Persistent trust is trust in long-term underlying properties or infrastructure; this arises through relatively static social and technological mechanisms. Dynamic trust is trust specific to certain states, contexts or short-term or variable information; this can arise through context-based social and technological mechanisms.

Persistent social-based trust in a hardware or software component or system is an expression of confidence in technology-based trust, because it is an assurance about the implementation and operation of that component or system. In particular, social-based trust and technology-based trust are linked through the vouching mechanism: it is important to know who is vouching for something as well as what they are vouching for. Hence, social-based trust should always be considered.

As considered further within Sect. 1.6, there is a complex relationship between security and trust; in CSP models, a perceived lack of security can be a key element in consumers’ lack of trust.

1.4 Privacy Issues for Cloud Computing

Current cloud services pose an inherent challenge to data privacy because they typically result in data being exposed in an unencrypted form on a machine owned and operated by a different organization from the data owner. The major privacy issues relate to trust (e.g. whether there is unauthorized secondary usage of PII), uncertainty (ensuring that data has been properly destroyed, who controls retention of data, how to know that privacy breaches have occurred and how to determine fault in such cases) and compliance (in environments with data proliferation and global, dynamic flows, and addressing the difficulty in complying with transborder data flow requirements). When considering privacy risks in the cloud, as noted in the introduction, context is very important, as privacy threats differ according to the type of cloud scenario. For example, there are special laws concerning the treatment of sensitive data, and data leakage and loss of privacy are of particular concern to users when sensitive data is processed in the cloud. Currently, this is so much of an issue that the public cloud model would not normally be adopted for this type of information. More generally, public cloud is the dominant architecture where cost reduction is concerned, but relying on a CSP to manage and hold one’s data in such an environment raises a great many privacy concerns.

In the remainder of this section, we consider a number of aspects that best illustrate these privacy issues: lack of user control, lack of expertise, potential unauthorized secondary usage, regulatory complexity (especially due to the global nature of cloud, complex service ecosystems, data proliferation and dynamic provisioning and related difficulties meeting transborder data flow restrictions), litigation and legal uncertainty.

1.4.1 Lack of User Control

User-centric control seems incompatible with the cloud: as soon as a SaaS environment is used, the service provider becomes responsible for the storage of data, in a way that limits the consumer’s visibility into and control over it. So how can a consumer retain control over their data when it is stored and processed in the cloud? This can be a legal requirement and also something users/consumers want—it may even be necessary in some cases to provide adequate trust for consumers to switch to cloud services.

Key aspects of this lack of user control include:
  1. Ownership of and control over the infrastructure: In cloud computing, consumers’ data is processed in ‘the cloud’ on machines they do not own or control, and there is a threat of theft, misuse (especially for different purposes from those originally notified to and agreed with the consumer) or unauthorized resale. See further discussion in Sect. 1.4.3.

  2. Access and transparency: It is not clear that it will be possible for a CSP to ensure that a data subject can get access to all his/her PII. There can be a lack of transparency about where data is, who owns it and what is being done with it. Furthermore, it is difficult to control (and even to know) the exposure of the data transferred to the cloud, because information passing through some countries (including the USA, as permitted by the US Patriot Act) can be accessed by law enforcement agencies.

  3. Control over the data lifecycle: A CSP may not comply with a request for deletion of data. Further detail is given in Sect. 1.5.4. Similarly, it is not necessarily clear who controls retention of data (or indeed what the regulatory requirements are in that respect, as there can be a range of different data retention requirements, some of which may even be in conflict).

  4. Changing provider: It can also be difficult to get data back from the cloud and avoid vendor lock-in, as considered further in Sect. 1.5.3.

  5. Notification and redress: There are uncertainties about notification, including of privacy breaches, and about the ability to obtain redress. It can be difficult to know that privacy breaches have occurred and to determine who is at fault in such cases.

  6. Transfer of data rights: It is unclear what rights in the data will be acquired by data processors and their subcontractors and whether these are transferable to other third parties upon bankruptcy, takeover or merger [43].


1.4.2 Lack of Training and Expertise

Deploying and running cloud services may create many jobs requiring high-level skills, but a shortage of STEM (science, technology, engineering and mathematics) graduates in Europe and other parts of the world could make it difficult to recruit suitably qualified people. In particular, a lack of trained personnel can be an issue from a security point of view.

In addition, people may lack understanding about the privacy impact of decisions they make. Technology in general exacerbates this problem, as more employees are able to trigger privacy consequences, and these can be further-reaching: instead of protecting data on a server to which very few people have access, employees can now leave sensitive information unencrypted on a laptop, or expose confidential information at the flick of a switch. In the case of cloud, it is relatively quick and easy to go to a portal to request a service that is instantly provided, and it takes only a credit card if public cloud services such as those from Salesforce and Google are used. Hence, unless proper management procedures are in place, there is a danger that employees could switch to using cloud computing services without adequately considering the consequences and risks for that particular situation.

1.4.3 Unauthorized Secondary Usage

There is a risk (and perhaps even an expectation!) that data stored or processed in the cloud may be put to unauthorized uses. It is part of the standard business model of cloud computing that the service provider may gain revenue from authorized secondary uses of users’ data, most commonly the targeting of advertisements. However, some secondary data uses would be very unwelcome to the data owner (such as the resale of detailed sales data to their competitors). Therefore, it may be necessary for consumers and CSPs to make legally binding agreements as to how data provided to CSPs may be used. At present, there are no technological barriers to such secondary uses, although, as we consider further in various chapters of this book, it is likely that in future such agreements might be enforceable in a technological sense. This would help enhance trust and mitigate the effects of the blurring of security boundaries.
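One often-proposed technological approach to enforcing such agreements is a ‘sticky policy’, in which machine-readable usage conditions travel with the data and are checked before each use. The sketch below is purely illustrative (all class, field and purpose names are hypothetical) of how a provider-side check could refuse an unauthorized secondary use:

```python
# Hypothetical 'sticky policy' sketch: data is packaged together with the
# purposes its owner has authorized, and every use is checked against them.
from dataclasses import dataclass, field

@dataclass
class PolicyWrappedData:
    payload: dict
    allowed_purposes: frozenset = field(default_factory=frozenset)

def use_data(item, purpose):
    """Refuse any usage not listed in the policy attached to the data."""
    if purpose not in item.allowed_purposes:
        raise PermissionError(f"purpose {purpose!r} not authorized by data owner")
    return item.payload

sales = PolicyWrappedData({"region": "EU", "units": 1200},
                          allowed_purposes=frozenset({"billing", "advertising"}))
use_data(sales, "billing")      # permitted: listed in the policy
# use_data(sales, "resale")     # would raise PermissionError
```

A real sticky-policy system would additionally bind the policy to the data cryptographically, so that it cannot simply be stripped off; that machinery is omitted here.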

1.4.4 Complexity of Regulatory Compliance

Due to the global nature of cloud computing and the many legislations in place around the world, it can be complex and difficult to ensure compliance with all the legislation that may apply in a given case.

Putting data in the cloud may impact privacy rights, obligations and status: for example, it may make it impossible to comply with some laws such as the Canadian Privacy Act or health laws. Legal protection can be reduced, and trade secrets may be impacted.

Location matters from a legal point of view as different laws may apply depending on where information exists, but in cloud computing, the information might sometimes be in multiple places simultaneously; it may be difficult to know exactly where it is or it may be in transit. A complicating factor is that there are multiple copies of data located in the cloud. Furthermore, these copies can be managed by different entities: a backup SP, a provider used to respond to peak capacity needs, specialized services, etc.

Correspondingly, central properties of cloud that can make regulatory compliance difficult are data proliferation and dynamic provisioning; we consider these in turn. In addition, it can be possible to violate local laws when transferring data stored in the cloud: cloud computing exacerbates the transborder data flow issue because, due to its dynamic nature, it can be extremely difficult to ascertain which specific server or storage device will be used. These transborder data flow restrictions are a special case that we consider subsequently.

Data Proliferation

Data proliferation is a feature of cloud, and this happens in a way that may involve multiple parties and is not controlled by the data owners. CSPs ensure availability by replicating data in multiple data centres. It is difficult to guarantee that a copy of the data or its backups are not stored or processed in a certain jurisdiction, or that all these copies of data are deleted if such a request is made. This issue is considered further in Sect. 1.5.4.

Movement of data onto the cloud and potentially across and between legal jurisdictions, including offshoring of data processing, increases risk factors and legal complexity [43, 44]. Governance and accountability measures also become more complex as processes are outsourced and data crosses organizational boundaries [45]. The risks that can arise from choosing the wrong business partner can be daunting and very difficult to assess, especially in cloud-based environments, where even knowing the jurisdictions involved can be quite difficult [46]. Issues of jurisdiction (i.e. about whose courts would hear a case), which law applies and whether a legal remedy can be effectively enforced need to be considered [47]. A cloud computing service which combines outsourcing and offshoring may raise very complex issues [48]. Hence, it can be difficult to ascertain privacy compliance requirements in the cloud.

Dynamic Provisioning

Cloud computing faces many of the same problems as traditional outsourcing, yet the dynamic nature of cloud makes many existing provisions designed for more static environments obsolete, or impractical to set up in such a short timescale. Model contracts, considered further in the following section, are one example. It is not clear which party is responsible (statutorily or contractually) for ensuring that legal requirements for personal information are observed, or that appropriate data-handling standards are set and followed [49], or whether the parties can effectively audit third-party compliance with such laws and standards. Nor is it yet clear to what extent cloud subcontractors involved in processing can be properly identified, checked and ascertained as being trustworthy, particularly in a dynamic environment.

1.4.5 Addressing Transborder Data Flow Restrictions

Privacy and data protection regulations restrict the transfer of personal information across national borders; this includes restricting both the physical transfer of data and remote access to it. Transfers from all countries with national legislation are restricted, which includes EU and European Economic Area (EEA) countries, Argentina, Australia, Canada, Hong Kong and New Zealand. From EU/EEA countries, personal information can be transferred to countries that have ‘adequate protection’, namely, all other EU/EEA member states and also Switzerland, Canada, Argentina and Israel (all of which have regulations deemed adequate by the EU). No other countries have privacy regulations deemed adequate, so if information is to be sent to them, other approaches need to be used.

One such mechanism is that information can be transferred from an EU country to the USA if the receiving entity has joined the US Safe Harbor agreement [20].

Personal information can, however, be transferred from any EU/EEA country to a non-EU/EEA country other than those deemed adequate (such as Canada and Argentina) if model contracts have been signed (and, in many instances, approved by the country regulator), or Binding Corporate Rules (BCRs) have been approved, or the individual has ‘freely given’ consent. Model contracts are contractual agreements that contain data protection commitments, company liability requirements and liabilities to the individuals concerned. Transfers from other countries with national privacy legislation (e.g. Canada, Argentina) also require contractual agreement. BCRs are binding internal agreements/contracts that obligate all legal entities within a corporate group that will have access to EU personal information to adhere to all obligations of the EU Data Protection Directive.
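The adequacy and mechanism rules just described amount to a small decision procedure. Purely for exposition, they could be encoded as below; this is a sketch of the rules as stated above at the time of writing, not legal advice, and the country lists are deliberately partial and illustrative.

```python
# Illustrative encoding of the transfer rules described above (sketch only).
EEA_SAMPLE = {"DE", "FR", "IE", "NO"}        # partial EU/EEA list, for illustration
ADEQUATE   = {"CH", "CA", "AR", "IL"}        # deemed adequate by the EU

def transfer_permitted(dest, safe_harbor=False, model_contract=False,
                       bcr=False, consent=False):
    """May personal data be sent from an EU/EEA country to `dest`?"""
    if dest in EEA_SAMPLE or dest in ADEQUATE:
        return True                          # adequate protection
    if dest == "US" and safe_harbor:
        return True                          # receiving entity in Safe Harbor
    return model_contract or bcr or consent  # otherwise a legal mechanism is needed

assert transfer_permitted("CH")
assert not transfer_permitted("US")
assert transfer_permitted("US", safe_harbor=True)
assert transfer_permitted("IN", model_contract=True)
```

The point of the sketch is how quickly the decision branches multiply; real compliance additionally depends on the member state of origin, the data category and the approval status of each mechanism.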

The problem is that these techniques (and especially model contracts as currently used) are not well suited to cloud environments. The first reason is regulatory complexity and uncertainty in cloud environments, especially due to divergences between the individual European member states’ national laws implementing the European Data Protection Directive, 1995. The second reason is that these techniques are not flexible enough for cloud, because administering and obtaining regulatory approval for model contracts can result in lengthy delays: the notification and prior approval requirements for EU model contracts vary significantly across the EU but are burdensome and can take from 1 to 6 months to set up. BCRs are suitable for dynamic environments, but their scope is limited: they apply only to data movement within a company group; it may be difficult for SMEs to invest in setting them up; and there are only a few approved BCRs to date, although it is a relatively new technique.

It is not just transborder data flow requirements that restrict the flow of information across borders: there may also be trade sanctions and other export restrictions, for example, restrictions on exporting cryptography and confidential data from the USA.

Not knowing which routes transnational traffic will take makes it very difficult to understand which particular laws will apply. However, one interpretation of Article 4 of Directive 95/46/EC is that the transit of data through a territory is not relevant from the legal point of view: for example, if data are transferred from France to the USA, whether the data flows through network links that run via the UK and Canada seems to be legally irrelevant [7, p. 103].

Even if the transit of data is not relevant to consider, it is still difficult to enforce transborder data flow regulations within the cloud. Cloud computing can exacerbate the problem of knowing the geographic location where processing is occurring: due to its dynamic nature, this can be extremely difficult to find out.

1.4.6 Litigation

Another aspect is litigation: a CSP may be forced to hand over data stored in the cloud, as illustrated by the US vs. Weaver case [50], where Microsoft was requested via a trial subpoena rather than a warrant to provide emails handled by their Hotmail service. A government only needs to show the requested material is relevant to the case for a subpoena, whereas for a warrant, probable cause must be demonstrated. In order to avoid a similar situation occurring with non-governmental entities, subscribers to cloud services could include contractual provisions in the service agreement that govern the CSP’s response to any subpoena requests from such entities.

1.4.7 Legal Uncertainty

Legal frameworks have been instrumental in, and key to, the protection of users’ personal and sensitive information. As considered briefly in Sect. 1.3.1, in Europe there is national legislation based upon an EU Directive; in the USA there is a patchwork of legislation according to sector, information and/or geographical area; and in many other countries worldwide, analogous frameworks apply. The fundamental concepts of such frameworks are in the main technology-neutral, and their validity would still apply to cloud computing. Nevertheless, such frameworks—along with the associated tools, advice and national legislation—need to be constantly updated and adjusted with current and future technologies in mind. There is currently a dialogue between organizations, regulators and stakeholders to ensure that the regulatory framework adapts to new frameworks and business models without eroding consumers’ trust in the systems that are deployed. In particular, the dynamically changing nature of cloud computing, potentially combined with cross-jurisdictional interactions, introduces legal aspects that need to be carefully considered when processing data.

There are existing legal constraints on the treatment of users’ private data by cloud computing providers. Privacy laws vary according to jurisdiction, but EU countries generally only allow PII to be processed if the data subject is aware of the processing and its purpose, and place special restrictions on the processing of sensitive data (e.g. health or financial data), the explicit consent of the data owner being part of a sufficient justification for such processing [51]. They generally adhere to the concept of data minimization, that is, they require that personally identifiable information is not collected or processed unless that information is necessary to meet the stated purposes. In Europe, data subjects can refuse to allow their personally identifiable data to be used for marketing purposes [17]. Moreover, there may be requirements on the security and geographical location of the machines on which personally identifiable data is stored [51]. European law limiting cross-border data transfers also might prohibit the use of cloud computing services to process this data if data would be stored in countries with weak privacy protection laws, and notification may be required [52].

Since cloud technology has moved ahead of the law, there is much legal uncertainty about privacy rights in the cloud and it is hard to predict what will happen when existing laws are applied in cloud environments.

Areas of uncertainty still under discussion include whether the procedure of anonymizing or encrypting personal data may itself be regarded as regulated ‘processing’, requiring consent, and whether processing for the purpose of enhancing users’ privacy is exempt from privacy protection requirements. Specifically, it can be unclear in practice whether or not the data to be processed is personal data, and hence whether or not there are legal responsibilities associated with its processing. Anonymization and pseudonymization processes (such as key-coding/obfuscation, fragmenting, deleting ‘identifying information’ such as names and IP addresses, or encryption) may in some circumstances yield data that is still personal data under the current definition and in other circumstances not, and it may not be obvious which is the case. It follows that it may not be clear, for example, whether or not certain data can be sent outside the EU, or whether other actions restricted by EU law can be performed [53].

In general, the legal situation is subject to change: legislation has not yet been updated to address the challenges above, and courts have not yet ruled many cases specifically related to cloud computing.

1.4.8 Privacy Conclusions

In summary, we are seeing the biggest change in privacy since the 1980s and there is uncertainty in all regions. Cloud (and its inherent pressure towards globalization) is helping strain traditional frameworks for privacy. Policymakers are pushing for major change—fast-tracking concepts of fairness, placing more emphasis upon accountability (see Sect. 1.7) and driving increased protection. This includes the draft US Privacy Bill of Rights and the EU data protection framework currently under consideration [21, 22].

Cloud computing offers significant challenges for organizations that need to meet various global privacy regulations, including the complexity of existing global legislation necessitating legal advice. Cloud faces the same privacy issues as other service delivery models, but it can also magnify existing issues, especially transborder data flow restrictions, liability and the difficulty in knowing the geographic location of processing and which specific servers or storage devices will be used. In addition, care must be taken to delete data and virtual storage devices, especially with regard to device reuse; this is considered further in the following section and is both a privacy and a security issue. More broadly, security is an aspect of privacy that is considered further in the next section—hence, many of the issues raised in that section, including the difficulties in enforcing data protection within cloud ecosystems, may be seen to also be privacy issues.

1.5 Security Issues for Cloud Computing

As we shall discuss further in Sect. 1.6.2, security often tops the list of cloud user concerns. Cloud computing presents different risks to organizations than traditional IT solutions. There are a number of security issues for cloud computing, some of which are new, some of which are exacerbated by cloud models and others that are the same as in traditional service provision models. The security risks depend greatly upon the cloud service and deployment model. For example, private clouds can to a certain extent guarantee security levels, but the economic costs associated with this approach are relatively high.

At the network, host and application levels, security challenges associated with cloud computing are generally exacerbated by cloud computing but not specifically caused by it. The main issues relate to defining which parties are responsible for which aspects of security. This division of responsibility is hampered by the fact that cloud APIs are not yet standardized. Customer data security raises a number of concerns, including the risk of loss, unauthorized collection and usage and generally the CSP not adequately protecting data.

There are a number of different ways of categorizing security risks; moreover, these fit into a broader model of cloud-related risks. For example, according to the Cloud Security Alliance [4], the top threats to cloud computing are abuse and nefarious use of cloud computing, insecure interfaces and APIs, malicious insiders, shared technology issues, data loss or leakage, account or service hijacking and unknown risk profile. They were unable to reach a consensus on ranking the degree of severity of these risks.

Abuse and nefarious use could cover a wide variety of threats, largely considered within Sect. 1.5.2 below (unwanted access), but could also include the type of threats considered in Sect. 1.4.3 above (unauthorized secondary usage) or abuse of cloud resources – for example, trying to use as much resource as possible (which could be quite high with a cloud model) without paying or in order to limit access for others. Insecure cloud interfaces and cloud APIs are considered within Sect. 1.5.5 below. Shared technology issues are considered within Sect. 1.5.4 (inadequate data deletion) and Sect. 1.5.7 (isolation failure). Malicious insiders could be considered with respect to a number of scenarios, but especially those considered in Sect. 1.5.2. Some aspects of data exposure have been covered in the previous section (covering privacy issues); others are considered in Sect. 1.5.6 (backup issues) and Sect. 1.5.2 (unwanted access). In this section, we also consider the relative lack of interoperability, assurance, transparency and monitoring in the cloud. In addition, we consider how a gap in security can arise in cloud environments. For further details about cloud security issues, see, for example, [54, 55, 56].

1.5.1 Gap in Security

In general, security controls for the cloud are the same as those used in other IT environments. But as the customer cedes control to the cloud provider, there is a related risk that the CSP will not adequately address the security that they should be handling, or even that SLAs do not include any provision of the necessary security services.

This risk is dependent upon the service model used. The lower down the stack the cloud provider, the more security the consumer is responsible for: thus, the consumer of IaaS needs to build in security as they are primarily responsible for it, whereas in SaaS environments, security controls and their scope (as well as privacy and compliance) are negotiated into the contracts for service. The customer may need to understand how the cloud provider handles issues such as patch management and configuration management as they upgrade to new tools and new operating systems, as well as the IT security hardware and software that the cloud provider is using and how the environment is being protected. In the case of IaaS and PaaS, cloud providers need to clarify the kind of IT security the customer is expected to put in place. With SaaS, the customer still needs to provide access security through its own systems, which could either be an identity management system or a local access control application.
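The division of responsibility described above can be made concrete as a simple mapping across service models. The table below is illustrative, not normative (real contracts draw the boundaries case by case), and the layer names are hypothetical:

```python
# Rough sketch of the shared-responsibility split implied in the text:
# the lower down the stack the provider sits, the more the consumer owns.
RESPONSIBILITY = {
    # layer:          who is primarily responsible under each service model
    "physical":       {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "hypervisor":     {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "os_patching":    {"IaaS": "consumer", "PaaS": "provider", "SaaS": "provider"},
    "application":    {"IaaS": "consumer", "PaaS": "consumer", "SaaS": "provider"},
    "access_control": {"IaaS": "consumer", "PaaS": "consumer", "SaaS": "consumer"},
}

def consumer_duties(model):
    """Layers the consumer must secure under the given service model."""
    return [layer for layer, owners in RESPONSIBILITY.items()
            if owners[model] == "consumer"]

print(consumer_duties("SaaS"))  # ['access_control']
```

Note that even under SaaS the consumer retains at least access control, matching the observation above that the customer still provides access security through its own systems.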

Furthermore, it may be difficult to enforce protection throughout the cloud ecosystem. As discussed in Sect. 1.3.2, the CSP needs to implement ‘reasonable security’ when handling personal information. Different companies may be involved in the cloud supply chain, and this can make it difficult to ensure that such security is provided all the way along the chain. At present, clients often only know the initial CSP, and the standard terms and conditions of cloud computing service providers do not include any clauses ensuring the level of security provided: they provide no guarantee as to the security of data and even deny liability for deletion, alteration or loss related to data that is stored. As current terms of service are very much set in favour of the CSP [49], if anything goes wrong, it is often the customer that will be made liable.

1.5.2 Unwanted Access

There needs to be an appropriate level of access control within the cloud environment to protect the security of resources. Cloud computing may actually increase the risk of access to confidential information.

First, this may be by foreign governments: there can be increased risks due to government surveillance over data stored in the cloud, as the data may be stored in countries where previously it was not. Governments in the countries where the data is processed or stored may even have legal rights to view the data under some circumstances [6, 57], and consumers may not be notified if this happens. One example is the US Patriot Act, mentioned previously, which is an important concern for many customers considering switching to CSP models.

Second, as with other computing models, there is an underlying risk of unauthorized access that may be exacerbated if entities with inadequate security mechanisms are involved in the provider chain (e.g. if they have inadequate vetting of internal IT staff who have highly privileged access). Data theft from machines in the cloud may be carried out by rogue employees of CSPs, by data thieves breaking into service providers’ machines or even by other customers of the same service if there is inadequate separation of different customers’ data on a machine that they share in the cloud. Attackers may also break into the networks of the CSP, subcontractors or co-hosted customers, or use de-anonymization techniques (see [58]). The damage that can be caused in these cases can be greater than in non-cloud environments, due to the scale of operation and the presence of roles in cloud architectures with potentially extensive access, including CSP system administrators and managed security service providers.

In general, cloud storage can be more at risk from malicious behaviour than processing in the cloud because data may remain in the cloud for long periods of time and so the exposure time is much greater. On the other hand, there is more potential for usage of encryption in cloud storage [56].

1.5.3 Vendor Lock-In

Cloud computing, as of today, lacks interoperability standards. Competing architectural standards are being developed, including Open Virtualization Format [59], Open Cloud Computing Interface [60], Data Liberation Front [61], SNIA Cloud Data Management Interface (CDMI) [62] and SAML [63] with big cloud vendors pushing their own mutually incompatible de facto standards. Limitations include differences between common hypervisors, gaps in standard APIs for management functions, lack of commonly agreed data formats and issues with machine-to-machine interoperability of web services. The lack of standards makes it difficult to establish security frameworks for heterogeneous environments and forces people for the moment to rely on common security best practice. As there is no standardized communication between and within cloud providers and no standardized data export format, it is difficult to migrate from one cloud provider to another or bring back data and process it in-house.
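Pending standard APIs and export formats, one consumer-side mitigation for lock-in is to keep (or regularly export) data in a provider-neutral, self-describing representation, so that migration does not depend on any vendor-specific export tool. A minimal sketch, with hypothetical field names:

```python
# Provider-neutral export sketch: records are serialized to JSON with an
# explicit schema tag so they can be validated and re-imported elsewhere.
import json

def export_records(records, schema_version="1.0"):
    """Serialize records with a schema tag for later re-import."""
    return json.dumps({"schema": schema_version, "records": records},
                      sort_keys=True, indent=2)

def import_records(blob, expected_schema="1.0"):
    """Parse an export, refusing schema versions we do not understand."""
    doc = json.loads(blob)
    if doc["schema"] != expected_schema:
        raise ValueError(f"unsupported schema {doc['schema']!r}")
    return doc["records"]

blob = export_records([{"id": 1, "name": "Alice"}])
assert import_records(blob) == [{"id": 1, "name": "Alice"}]
```

This addresses only the data-format half of lock-in; differences between hypervisors and management APIs, noted above, still require standardization efforts such as OVF and CDMI.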

1.5.4 Inadequate Data Deletion

Another major issue for cloud is to ensure that the customer has control over the lifecycle of their data, and in particular deletion, in the sense of how to be sure that data that should be deleted really are deleted and are not recoverable by a CSP. There are currently no ways to prove this as it relies on trust, and the problem is exacerbated in cloud because there can be many copies of the data (potentially held by different entities and some of which may not be available) or because it might not be possible to destroy a disk since it is storing other customers’ data.

The risks of data exposure vary according to the service model. Using IaaS or PaaS, one or more VMs are created in order for a program to be run within those—when the task is finished, the VMs and the temporary disk space are released. In fact, IaaS providers can provide storage and VM services which are complementary but allow for persistency of data between usage of multiple VMs. An allocated VM could be started to carry out a task and stopped once the task is completed; this is logically separate from managing the lifecycle of a VM (as the VM can be deleted when the data are no longer needed). Using a SaaS approach, on the other hand, the customer is one of the users of a multi-tenant application developed by the cloud service provider, and the customers’ data is stored in the cloud, to be accessible the next time the customer logs in. The data would only be deleted at the end of the lifecycle of the data, if the customer wishes to change service provider, etc. There is a correspondingly higher risk to the customer if hardware resources are reused than if dedicated hardware is used.
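One frequently suggested mitigation for the deletion problem is ‘crypto-shredding’: if the customer encrypts data before upload and later destroys the key, every remote copy and backup becomes unreadable, which approximates deletion without having to trust the CSP to erase all replicas. The sketch below uses a toy SHA-256 keystream purely for illustration; a real deployment would use a vetted cipher such as AES.

```python
# Crypto-shredding sketch (toy cipher, for illustration only): data is
# encrypted client-side; destroying the key renders all cloud copies useless.
import hashlib
import secrets

def keystream_xor(key, data):
    """XOR data with a SHA-256 counter keystream (toy, NOT production crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, b"patient record #17")
# ...only the ciphertext (and its replicas/backups) ever reach the cloud...
assert keystream_xor(key, ciphertext) == b"patient record #17"
key = None  # 'shredding' the key makes every stored copy unrecoverable
```

The trust problem is thereby shifted from the CSP’s deletion procedures to the customer’s own key management, which is usually a smaller and more auditable surface.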

1.5.5 Compromise of the Management Interface

In public cloud service provision, the management interfaces are accessible via the Internet. This poses an increased risk compared to traditional hosting providers, both because remote-access and web-browser vulnerabilities come into play and because these interfaces give access to larger sets of resources. This increased risk is present even if access is controlled by a password.

1.5.6 Backup Vulnerabilities

Cloud service providers make multiple copies of data and place them in different locations to provide a high level of reliability and performance. This serves as a form of backup, although it can create additional liabilities and expose data to attackers. There is still the potential for data to be lost, particularly with Storage as a Service. A popular solution is a hybrid storage cloud, in which an appliance is placed at the customer's site and backup data is stored there, with a replicated copy sent to a cloud storage service provider. Indeed, one of the top threats identified by the CSA [4] is 'data loss or leakage', where records may be deleted or altered without a backup of the original content. A record might be unlinked from a larger context, making it unrecoverable; data could be stored on unreliable media; and if there is a key management failure, data could be effectively destroyed. There have already been cases where backup was provided as an optional extra for a storage service, and a failure in that service resulted in the complete loss of the data of users who had not paid that premium. In general, however, cloud services can be more resilient than traditional services.

1.5.7 Isolation Failure

Multi-tenancy raises the security concern that one consumer may influence the operations of, or access the data of, other tenants running on the same cloud [64]. Multi-tenancy is an architectural feature whereby a single instance of software runs on a SaaS vendor's servers, serving multiple client organizations. The software is designed to virtually partition its data and configuration so that each client organization works with a customized virtual application instance. In such a SaaS model, the customers are users of multi-tenant applications developed by CSPs; it is likely that personal and even financial data are stored by the CSP in the cloud, and it is the responsibility of the CSP to secure those data. There is a risk that the mechanisms that separate storage, memory or routing between different tenants might fail, so that, for example, other tenants could access sensitive information.
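The logical partitioning described above can be pictured as a single shared store in which every operation is scoped by a tenant identifier. The class below is a deliberately minimal sketch (all names invented); a bug in exactly this scoping layer is what an isolation failure amounts to.

```python
class MultiTenantStore:
    """Toy illustration of logical partitioning in a multi-tenant SaaS store.

    One software instance serves all tenants over shared physical storage;
    isolation is purely logical, enforced by the mandatory tenant filter."""

    def __init__(self):
        self._rows = []   # shared physical storage: (tenant_id, record)

    def insert(self, tenant_id, record):
        self._rows.append((tenant_id, record))

    def query(self, tenant_id):
        # The tenant filter is mandatory: omitting it would leak other
        # tenants' data from the shared table -- an isolation failure.
        return [rec for tid, rec in self._rows if tid == tenant_id]
```

In a real SaaS application, the same scoping must hold at every layer (query builders, caches, search indexes, backups), since a single missed filter anywhere re-exposes the shared storage.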

Some providers use job scheduling and resource management [65], but most cloud providers use virtualization to maximize hardware utilization. Virtual machines (VMs) are sandboxed environments and are therefore assumed to be completely isolated from each other; this assumption is what makes it appear safe for users to share the same hardware. However, this isolation can sometimes break down, allowing attackers to escape the boundaries of the sandboxed environment and gain full access to the host [66]. The use of virtualization can introduce new security vulnerabilities, such as cross-VM side-channel attacks, where the attacker breaches the isolation between VMs and extracts data via information leakage arising from the sharing of physical resources [67]; virtual network attacks; inadequate data deletion before memory is assigned to a different customer (cf. Sect. 1.5.4); or 'escape' to the hypervisor, where an attacker uses a guest virtual machine to attack vulnerabilities in the hypervisor software [68].

1.5.8 Missing Assurance and Transparency

One approach to privacy and security is to leave protection to the service provider. We have discussed above in Sect. 1.5.1 how expectations in this regard typically vary according to the service model. The cloud customer can in many cases transfer risk to the cloud provider (e.g. via SLAs). However, not all risks can be transferred, and ultimately the cloud customer may be legally accountable (e.g. in its role as the data controller). Moreover, the consequences of failure may include reputational damage, legal liability or even business failure, and these are unlikely to be fully compensated for.

So, cloud customers need to obtain assurance from cloud service providers that their data will be protected properly. They may also require to be notified about security and privacy incidents. Some cloud providers publish information about their data-handling practices and security mechanisms and offer related assurance, for example, SAS 70 Type II certification; this type of approach is in any case already taken for accounting data. ENISA has developed a Cloud Computing Information Assurance Framework [69] for this purpose.

However, in some cases it can be difficult to take this approach, particularly where there are multiple transfers of data, for reasons considered in the previous section. Other drawbacks are that standard SaaS business models involve repurposing of customer data and that cloud computing terms of service typically offer no compensation if the customer's data is stolen or misused.

Various security policy and risk assessment frameworks exist, including good practice guides from UK CESG [70]; NIST 800 series [71]; ISO 27001 [72] and 27002 [73] Information Security Management; ISO 31000 Risk Management [74]; and CSA’s [54], ENISA’s [7] and Shared Assessments’ analysis [75] of risks involved in migration to cloud environments. However, in general, current risk assessment methods have not been designed for use in a cloud computing setting. Liability assignment is also particularly difficult in an international context. Furthermore, it is very difficult and resource-demanding to detect and then prove that electronic data has been compromised and to identify the perpetrator. What is reported to the police is just a small percentage of all violations detected [76].

In current certification schemes, certificates are awarded to traditional, monolithic software systems and become invalid when a system is used in an open, dynamic environment, as in the mash-ups of different services deployed in the cloud. However, CSA’s Trusted Cloud Initiative [77] is working towards certification of ‘trusted clouds’.

A lack of technology support arises because current multijurisdictional regulatory frameworks are extremely complex, and addressing these problems requires teams with strong interdisciplinary skills, which are rarely formed. The UK ICO has published Privacy Impact Guidelines [78] and a business case for investing in proactive privacy protection, but the certification of properties such as privacy has had only a limited take-up to date.

Cloud-based storage of data that requires privacy assurance (such as personal data) is almost always deployed in private clouds. Heterogeneous cloud infrastructures make it difficult to have effective controls to check privacy compliance (often offered as an optional extra) in an automated way, and the end user has no means to verify that his/her privacy requirements are being fulfilled. Effective and profitable utilization of cloud services relies on data transfer and storage across services and different cloud infrastructures (which may have different jurisdictional restrictions).

Furthermore, end-user agreements are stated in natural languages, making it hard for computer programs to assess whether application providers respect data usage agreements. Existing technologies filter information in different ways, including privacy-enhanced access control [79], data loss prevention techniques [80], redaction [81], various privacy-enhancing technologies [82] and database proxies like Informatica’s dynamic data masking tool [83]. W3C has produced a number of standards, including P3P [84] (which is now discontinued although the concepts have subsequently been built on [85]) and tracking standards [86]. Existing auditing frameworks manually verify the adequacy of the data-handling controls used [87, 88, 89]. These procedures are extremely costly.
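To illustrate why machine-readable agreements matter, here is a hypothetical data-usage policy encoded as data, together with a checker a program can actually run; the field names, purposes and thresholds are all invented for illustration and do not correspond to any of the standards cited above.

```python
# Hypothetical machine-readable data-usage policy: permitted purposes,
# a retention limit and permitted processing regions.
POLICY = {
    "purposes": {"billing", "service-provision"},
    "retention_days": 90,
    "allowed_regions": {"EU"},
}

def is_compliant(access: dict, policy: dict = POLICY) -> bool:
    """Return True if a proposed data access satisfies the policy.

    `access` describes one proposed use of a data item: its purpose,
    the age of the data in days and the region where processing occurs."""
    return (access["purpose"] in policy["purposes"]
            and access["age_days"] <= policy["retention_days"]
            and access["region"] in policy["allowed_regions"])
```

Unlike a natural-language end-user agreement, such a policy can be evaluated automatically on every access, which is the property the filtering and data-loss-prevention technologies cited above try to provide at scale.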

Automated assurance is necessary to quickly evaluate the evidence that obligations with respect to personal data handling and business compliance requirements are being carried out (for instance, the collection of events showing who created a piece of data, who modified it and how and so on). Governance, risk management and compliance (GRC) frameworks (e.g. RSA eGRC [90]) are a common means of automating compliance in enterprises but do not provide much breadth or strong co-design of technical and legal mechanisms, and although they can target specific regulations, they rarely deal with concepts like privacy and transparency, with the notable exception of recent work within CSA [91].

An open problem is how to find a balance between data provenance and related privacy or other regulatory constraints in the cloud, where physical perimeters are not clearly delimited. The lack of tools to support data localization and transfer across services and cloud infrastructures creates barriers to cross-border considerations and different jurisdictional restrictions [43]. Incompatibilities between jurisdictions affect privacy assurance, and even within the EU, regulatory requirements are defined at a national level and can differ.

1.5.9 Inadequate Monitoring, Compliance and Audit

There are a number of issues related to maintaining and proving compliance when using cloud computing. If a cloud customer migrates to the cloud, their previous investment in security certification may be put at risk if the CSP cannot provide evidence of their compliance with the relevant requirements and does not enable the cloud customer to audit its processing of the customer’s data. Furthermore, it may be difficult to evaluate how cloud computing affects compliance with internal security policies. Certain kinds of compliance (such as PCI DSS) may actually not be achievable within a public cloud infrastructure. The cloud customer may want to monitor that SLAs have been met, but the infrastructure may be very complex and not suited either for provision of the appropriate information or for analysis at the right level.

CSPs need to implement internal compliance monitoring controls, in addition to an external audit process. It may even be that a ‘right to audit’ clause is included in cloud contracts to allow customers to audit the cloud provider, particularly when the customer has regulatory compliance responsibilities. The cloud computing environment presents new challenges from an audit and compliance perspective, but existing solutions for outsourcing and audit can be leveraged. Transactions involving data that resides in the cloud need to be properly made and recorded, in order to ensure integrity of data, and the data owner needs to be able to trust the environment such that no untraceable action has taken place. However, provision of a full audit trail within the cloud, particularly in public cloud models, is still an unsolved issue. In addition, transactional data is a by-product with unclear ownership, and it can be hard to anticipate which data to protect, as even innocuous-seeming data can turn out to be commercially sensitive [45]. Methods for monitoring the cloud’s performance are currently being explored by the CloudAudit working group [91].

Nor are there efficient mechanisms for gathering convincing evidence from verified log data in distributed multi-tenancy environments, even if cloud providers would be willing to provide this. Although there are several existing log approaches, they do not fit cloud computing very well [92]. For example, the EGEE LB log solution [93] in grid computing is mostly used for debugging purposes only. Chukwa [94] is a large-scale log collection and analysis framework built on top of the Hadoop and MapReduce framework; it requires the ownership of all the machines that have data to be logged, which is not realistic in a multi-provider cloud environment.
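One building block for such verifiable log evidence is a tamper-evident, hash-chained audit log, in which each entry commits to its predecessor so that any later modification or deletion breaks the chain. The sketch below is illustrative only, with invented field names; a production system would additionally need signed checkpoints and secure timestamping to resist truncation by the log holder.

```python
import hashlib
import json

class HashChainedLog:
    """Tamper-evident audit log: each entry includes the hash of the
    previous one, so altering any past entry invalidates the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, actor: str, action: str, target: str) -> None:
        record = {"actor": actor, "action": action,
                  "target": target, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._prev = digest

    def verify(self) -> bool:
        # Recompute every hash and check the chain links back to genesis.
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor who holds only the latest chain hash can detect retroactive tampering, which is exactly the 'no untraceable action' property discussed above, though not omission of events that were never logged.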

Security Information and Event Management (SIEM or SIM/SEM) solutions [95], including, for example, RSA enVision [96] and HP ArcSight [97], provide a standardized approach to collecting information and events, storing and querying them and providing degrees of correlation, usually driven by rules. SIEM solutions do not cover business audit or strategic (security) risk assessment; instead they provide inputs that need to be properly analysed and translated into a suitable format for use by senior risk assessors and strategic policymakers. This is a painful and quite often manual process, prone to error. Risk assessment standards such as ISO 2700x [72, 73], NIST [98], etc., operate at a macro-level and usually do not fully leverage information coming from the logging and auditing activities carried out by IT operations.
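The kind of rule-driven correlation that SIEM tools perform can be illustrated with a toy example; the event format, threshold and window below are invented for illustration and are far simpler than the correlation engines of the products cited above.

```python
from collections import defaultdict

def correlate_failed_logins(events, threshold=3, window=60):
    """Toy SIEM-style correlation rule: raise an alert when one source
    produces `threshold` failed logins within `window` seconds.

    Each event is a dict with keys 't' (seconds), 'src' and 'type'."""
    by_src = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["t"]):
        if e["type"] != "login_failed":
            continue
        times = by_src[e["src"]]
        times.append(e["t"])
        # Drop events that have fallen out of the sliding window.
        while times and times[0] < e["t"] - window:
            times.pop(0)
        if len(times) >= threshold:
            alerts.append({"src": e["src"], "t": e["t"]})
    return alerts
```

The gap the text describes is precisely that such low-level alerts must then be translated, largely by hand, into the macro-level language of risk assessment standards.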

Similarly, there exist a number of frameworks for auditing a company’s IT controls, most notably COSO [99] and COBIT [100]. Also, trust services [101] provide a set of principles enabling auditors and CPAs to assess the quality and usefulness of security controls implemented in an enterprise’s infrastructure. With all of these, the gap between low-level monitoring and logging and high-level requirements is not efficiently bridged or well automated.

A few tools for monitoring cloud integrity exist, but with limited scope. Amazon CloudWatch [102] allows EC2 users to perform real-time monitoring of their CPU utilization, data transfers and disk usage. Haeberlen has provided a primitive audit mechanism for the cloud [103] and proposed an approach for accountable virtual machines [104]. HyTrust Appliance [105] is a hypervisor-consolidated log reporting and policy enforcement tool that logs from a system perspective. Chen and Wang of CSIRO have produced a prototype in which CSPs are made accountable for faulty services, together with a technique that allows identification of the cause of faults in binding web services, presenting this as 'accountability as a service' [106].

Further consideration of audit mechanisms for the cloud is given in Chap. 4.

1.5.10 Security Conclusions

There are a number of security issues for cloud, and these depend upon the service provision and deployment models. A number of open issues remain, including audit. Availability may be an issue for public clouds—the future speed and global availability of network access required to use them may prevent widespread adoption in the short to medium term.

Overall, security need not necessarily suffer in moving to the cloud model: there is scope for security to be outsourced to experts, and hence in many cases greater protection than before can be obtained. The major issues probably concern the selection of service providers with suitable controls in place, and privacy; both are context-dependent.

1.6 Trust Issues for Cloud Computing

This section builds upon the analysis given in Sect. 1.3 to briefly consider the main trust issues for cloud computing. We consider the concerns of cloud customers, who may be either citizen end users, or else organizations using cloud (providing information to CSPs that may be personal information of their customers, business-confidential information, information about end users or employees, etc.). We also consider weak trust relationships within the cloud service provision ecosystem and the lack of consensus about the trust management approaches to be used for cloud.

1.6.1 Trust in the Cloud

In traditional security models, a security perimeter is set up to create a trust boundary within which there is self-control over computing resources and where sensitive information is stored and processed. For example, the corporate firewall often marks this boundary. The network provides transit to other trusted end hosts, which operate in a similar manner. This model held for the original Internet, but does not do so for public and hybrid cloud. The security perimeter becomes blurred in the sense that confidential information may be processed outside known trusted areas as these computing environments often have fuzzy boundaries as to where data is stored and/or processed. On the other hand, in order to obtain the service, consumers need to extend their trust to the cloud service provider, and so this can provide a point of friction, as considered further below.

In assessing cloud computing provision, mechanisms providing dynamic, technology-based trust need to be used in combination with social and technological mechanisms providing persistent trust. If software processes provide information about the way in which information is stored, accessed and shared within a cloud, that information can only be trusted if trusted entities vouch for the method by which it is produced and assessed. Depending upon the context, these entities could be consumer groups, auditors, security experts, regulators, companies with proven reputations, established CSPs, etc. Moreover, trust relationships can be very much at the centre of certain security and privacy solutions: in particular, key escrow and other forms of key distribution and secret sharing, audit, compliance checking and pseudonymization. There is also a strong link with policy development: if personal or business-critical information is to be stored in the cloud, trust attains a new level of importance, and CSPs need to embrace such an approach [107].

1.6.2 Lack of Consumer Trust

A June 2011 survey of European citizens' attitudes to data protection [108] found that authorities and institutions—including the European Commission and the European Parliament (trusted by 55 % of people surveyed)—are trusted more than commercial companies. In fact, less than one-third trust phone companies, mobile phone companies and Internet service providers (32 %), and just over one-fifth trust Internet companies such as search engines, social networking sites and email services (22 %). Furthermore, according to this study, 70 % of Europeans are concerned that their personal data held by companies may be used for a purpose other than that for which it was collected. In a recent Cloud Industry Forum survey, the answers to 'how do you trust an online provider?' were reputation (29 %), recommendation from a trusted party (27 %), trial experience (20 %), contractual (20 %) and others (4 %) [3].

Organizations handling personal information have a legal and moral obligation to ensure privacy and thereby demonstrate the trustworthy nature of their service. Important questions to address include whether data is safe across the whole cloud, whether it is handled in line with users' expectations, whether its handling complies with laws and regulations, whether it is under control along its complete lifecycle, whether appropriate use and obligations are ensured along the processing chain and whether there are standards or general practices in place for operating in the cloud. At present there is a lack of confidence both about the cloud and about the answers to these questions, as seen from the results of a number of recent surveys considered below.

Business users recognize the cloud’s advantages in speeding innovation, accelerating business processes and reducing time to revenue. Correspondingly, businesses are already moving to the cloud: an IDC study found that 70 % of businesses are considering or already using private clouds [109].

However, CIOs are more wary. A recent study by Forrester found that business is adopting cloud 2.5 times faster than IT operations [110]. Enterprise IT executives cite well-founded concerns about the challenges of maintaining security, service levels and governance seamlessly across the entire IT value chain [111]. They also want to be sure the decisions they make today about cloud technology suppliers do not prevent them from innovating in the future. Hence, a number of critical challenges need to be addressed in order to encourage cloud adoption in enterprises. These key barriers to cloud adoption include:
  • 79 % were concerned about vendor lock-in [2].

  • 75 % were worried about cloud performance and availability [2].

  • 70 % of CIOs said cloud data security is a major concern [112].

  • 63 % were concerned about integrating internal and external services [2].

Figure 1.2 shows the percentage of respondents ranking the top three barriers to moving to cloud computing, based on the McKinsey Global Survey results of October 2010 (where n is the total number of respondents); security was the top concern for IT executives. Similarly, security was rated the top challenge of the cloud model in the IDC 2009 cloud user survey [2], and concerns about security also topped the list of primary barriers stopping UK businesses from making the transition to cloud computing, with lack of confidence coming second, in a recent Cloud Industry Forum survey [3]. Likewise, a 2010 survey of potential cloud customers by Fujitsu Research Institute [113] found that 88 % of potential cloud consumers are worried about who has access to their data and demand more awareness of what goes on in the back-end physical servers. According to an interview study on the views of security and user experience experts on trust in cloud services [114], the most important factor affecting perceived trust in cloud services is brand, with security and privacy the second most important aspect and transparency and reliability the third, with good auditing and agreement policies also deemed important. Businesses are sceptical about the promises that many cloud vendors and suppliers are making: 62 % of respondents felt that, when looking for a supplier, a code of practice would be important, while a further 28 % considered it essential in their selection process [3].
Fig. 1.2

Barriers to cloud technology

Since customers lack control of cloud resources, they are not in a good position to utilize technical mechanisms in order to protect their data against unauthorized access or secondary usage or other forms of misuse. Instead, they must rely on contracts or other trust mechanisms to try to encourage appropriate usage, in combination with mechanisms that provide compensation in the event of a breach, such as insurance, court action or penalties for breach of SLAs.

When it is not clear to individuals why their personal information is requested, or how and by whom it will be processed, this lack of control and lack of visibility of the provider supply chain will lead to suspicion and ultimately distrust [115]. There are also security-related concerns about whether data in the cloud will be adequately protected, as considered above. As a result, customers may hold back from using cloud services involving personally identifiable information unless they understand the obligations involved and the compliance risks faced, and have assurance that potential suppliers will address those risks. This is particularly the case where sensitive information is involved, for example, financial and healthcare information.

1.6.3 Weak Trust Relationships

Trust relationships at any point in the cloud service delivery chain may be weak, but exist in order that a service can be provided quickly. Significant business risk may be introduced in a way that is not transparent when a cloud transaction is initiated, due to loss of control in passing sensitive data to other organizations and the globalized nature of cloud infrastructure. Organizations that contract out key business processes may not even know that contractors are subcontracting, or even if they do, contract requirements regarding data protection measures may not be propagated down the contracting chain.

Trust along the chain from the customer to cloud providers at all levels may be non-transitive, and in particular the customer may not trust some of the subcontractors (XaaS providers). Indeed, due to a lack of transparency, they may not even be aware of the identity of the cloud providers in this chain. In particular, ‘on-demand’ and ‘pay-as-you-go’ models may be based on weak trust relationships, involve third parties with lax data security practices, expose data widely and make deletion hard to verify. In order to provide extra capacity at short notice or in real time, new providers could be added to the chain for which there is not sufficient chance to make adequate checks about their identity, practices, reputation and trustworthiness.

1.6.4 Lack of Consensus About Trust Management Approaches to Be Used

There is a lack of consensus about what trust management approaches should be used for cloud environments. The inherent complexity of trust, the subjectivity of some factors and the difficulty of contextual representation make trust measurement a major challenge. Artz and Gil [116] provide facets of trust that can be measured for assessment purposes. Standardized trust models are needed for verification and assurance of accountability, but none of the large number of existing trust models is adequate for the cloud environment [117]. Many trust models strive to accommodate some of the factors defined by Marsh [118] and others [119], and many trust assessment mechanisms aim to measure them. These tend to be developed in isolation, and there has been little integration between hard and soft trust solutions. No suitable metrics exist for accountability, only a very high-level consideration to date [120]. Furthermore, there is no current consensus on the types of evidence required to verify the effectiveness of trust mechanisms [121]. Although the CloudTrust protocol [122] defines some categories, it does not cover others, such as the legal liability of the parties involved.

1.6.5 Trust Conclusions

Trust is widely perceived as a key concern for end-user consumers, organizational customers and regulators. Lack of consumer trust is a key inhibitor to adoption of cloud services. People are suspicious about what happens to their data once it goes into the cloud. They are worried about who can access it and how it will be copied, shared and used, and they feel that they are losing control. Companies that move their computing from in-house systems to the public cloud are no longer so concerned about the health of servers, but instead about the confidentiality and security of their data. Regulators fear that jurisdictional controls and compliance will weaken with the cloud. All parties are concerned about potential access by certain foreign governments if sensitive data is moved to be stored within those countries.

Ultimately, usage of the cloud is a question of trade-offs between security, privacy, compliance, costs and benefits. Trust is key to adoption of SaaS, and transparency is an important mechanism. Furthermore, trust mechanisms need to be propagated right along the chain of service provision. We consider in the following section some solutions to these issues.

1.7 Approaches to Addressing Privacy, Security and Trust Issues

In this section, we present a brief overview of solutions and research in progress that aim to help address the concerns of privacy, security and trust in the cloud.

Overall, progress in this area demands that there should be consistent and co-ordinated development in three main dimensions:
  • Innovative regulatory frameworks: such as accountability [123], which can facilitate both the operation of global business and provision of redress within cloud environments [124].

  • Responsible company governance: whereby organizations act as a responsible steward of the data which is entrusted to them within the cloud, ensuring responsible behaviour via accountability mechanisms [125] and balancing innovation with individuals’ expectations—Privacy by Design [126, 127, 128] being a way of achieving this.

  • Supporting technologies: these include privacy-enhancing technologies [82, 129], security mechanisms [130], encryption [131], anonymization [132], etc.
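As a small illustration of the anonymization techniques mentioned in the last bullet, the sketch below generalizes a quasi-identifying attribute ('age') into ranges and then tests a k-anonymity property, i.e. that every combination of quasi-identifiers is shared by at least k records. Field names and the bucketing scheme are invented for illustration; real anonymization must also consider attribute and background-knowledge attacks.

```python
from collections import Counter

def generalize_ages(records, bucket=10):
    """Coarsen the quasi-identifier 'age' into ranges such as '20-29'."""
    out = []
    for r in records:
        lo = (r["age"] // bucket) * bucket
        out.append({**r, "age": f"{lo}-{lo + bucket - 1}"})
    return out

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values occurs
    in at least k records, so no individual stands out on those fields."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```

Generalization trades data utility for privacy: wider buckets make k-anonymity easier to achieve but the released data less useful, which is one reason such controls are often offered only as optional extras.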

By using a combination of these means, users and citizens can be provided with reassurance that their personal data will be protected, and cloud deployments can be made compliant with regulations, even within countries where such regulation is relatively strict.

We discuss in the rest of this book a number of mechanisms that illustrate promising approaches addressing the privacy, security and trust concerns considered above. Many of these mechanisms bridge the dimensions described above, or provide the basis for services that can be offered to organizations in order to enhance privacy and security. In particular, Chap. 2 bridges all three areas by considering law enforcement, Chap. 3 discusses how technical means can support regulatory compliance, and Chap. 4 shows how technology can help with corporate governance and provision of assurance. Chaps. 5 and 6 discuss technical mechanisms for enhancing security in the cloud, based upon trusted technologies. Chaps. 7 and 8 consider the mitigation of information-security risks within an enterprise, based upon technical and procedural means.

1.8 Conclusions

In this chapter, we have assessed some of the key privacy and security issues involved in moving to cloud scenarios and set out the basis of some approaches that address the situation. Many of these themes are developed and explored in the subsequent chapters of this book.

Cloud models of service provision and the closely related capacity for big data processing and extended data mining allow new innovative approaches based upon increased value of personal information. At the same time, this increased business use of personal information can be very contentious, and so mechanisms need to be provided so that individuals can retain control over it. In particular, more information is known, recorded and accessible, making it difficult for people not to be judged on the basis of past actions. Profiling of individuals by business is becoming much more extensive and powerful, and governments, too, are connecting information about citizens, and their ability for much stronger surveillance is steadily increasing, making ‘Big Brother’-type scenarios more possible over time.

Current cloud services pose an inherent challenge to data privacy because they typically result in data being present in unencrypted form on a machine owned and operated by a different organization from the data owner. There are threats of unauthorized use of the data by service providers and of theft of data from machines in the cloud. Fears of leakage of sensitive data [4] or loss of privacy are a significant barrier to the adoption of cloud services [5, 51]. These fears may be justified: for instance, in 2007, criminals targeted a prominent CSP and succeeded in stealing customer emails and addresses using a phishing attack [44]. Moreover, there are laws placing geographical and other restrictions on the processing of personal and sensitive information by third parties. These laws place limits on the use of cloud services as currently designed.

The large pools of resources made available through cloud computing are not necessarily located in the same country nor even on the same continent. Furthermore, the dynamic expansion or shrinkage of a cloud makes it difficult to keep track of what resources are used and in which country. This makes compliance with regulations related to data handling difficult to fulfil. Auditing is also a challenging task due to the volatility of the resources used.

The advantages of cloud computing—its ability to scale rapidly (through subcontractors), store data remotely (in unknown places) and share services in a dynamic environment—can become disadvantages in maintaining a level of assurance sufficient to sustain confidence in potential customers. These new features raise issues and concerns that need to be fully understood and addressed. Some of these issues will be shared with other paradigms, such as service-oriented architectures (SOA), grid, web-based services or outsourcing, but often, they are exacerbated by cloud. Privacy and security solutions need to address a combination of issues, and this may require new and even unique mechanisms rather than just a combination of known techniques for addressing selected aspects. The speed and flexibility of adjustment to vendor offerings, which benefits business and motivates cloud computing uptake, brings a higher risk to data privacy and security. This is a key user concern, particularly for financial and health data, and the associated lack of trust can be a key business inhibitor for cloud computing in domains where confidential or sensitive information is involved.

Responsible management of personal data is a central part of creating the trust that underpins adoption of cloud-based services and thereby encourages customers to use them. Privacy protection builds trust between service providers and users: accountability and privacy by design provide mechanisms to achieve the desired end effects and create this trust. This management can span a number of layers: policy, process, legal and technological. It is widely accepted as best practice that such mechanisms should be built into a system as early as possible in its lifecycle. Indeed, conforming to legal privacy requirements and meeting client privacy and security expectations with regard to personal information require corporations to demonstrate a context-appropriate level of control over such data at all stages of its processing, from collection to destruction.
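One technological mechanism in this vein is to bind a machine-readable handling policy to personal data and enforce it, with an audit trail, at every access, so that control and accountability follow the data through its lifecycle. The minimal sketch below is in the spirit of the "sticky policy" approach; the class, field names and purposes are invented for illustration and do not follow any standard schema:

```python
from dataclasses import dataclass, field


@dataclass
class PolicyBoundData:
    """Personal data carrying its own handling policy (sticky-policy style).

    Illustrative sketch: field names and purposes are assumptions,
    not a standardized policy language."""
    value: str
    allowed_purposes: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def access(self, who: str, purpose: str) -> str:
        # Every attempt is logged first, so denied accesses are accountable too.
        self.audit_log.append((who, purpose))
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        return self.value


record = PolicyBoundData("alice@example.com", {"billing"})
assert record.access("billing-svc", "billing") == "alice@example.com"
try:
    record.access("ads-svc", "marketing")   # denied: purpose not in policy
except PermissionError:
    pass
assert len(record.audit_log) == 2           # both attempts were recorded
```

The audit log supports accountability after the fact, while the purpose check enforces the policy at the point of use; real systems would additionally protect the data and log cryptographically, as surveyed later in the chapter.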

In this chapter, we have stressed the importance of context to privacy and security requirements. As a result, solutions often need to be tailored to a specific context. In general, customers evaluating cloud services should assess their organization's operational, security, privacy and compliance requirements to determine which approach would best suit them. Technology may help this decision-making process.

In summary, cloud providers need to safeguard the privacy and security of the personal and confidential data that they hold on behalf of organizations and users. In particular, it is essential for the adoption of public cloud systems that consumers and citizens are reassured that privacy and security are not compromised. The problems of privacy and security raised in this chapter will need to be addressed in order to provide and support trustworthy and innovative cloud computing services that are useful in a range of different situations.



Acknowledgements

The influence and input of various colleagues, notably Daniel Pradelles, in developing the ideas in this chapter are gratefully acknowledged.


References

  1. Mell, P., Grance, T.: The NIST definition of cloud computing. National Institute of Standards and Technology, NIST SP 800-145 (2011)
  2.
  3. Cloud Industry Forum: Transition to the cloud: the case for a code of practice. CIF Report (2011)
  4. Cloud Security Alliance: Top threats to cloud computing, v1.0, Mar (2010)
  5. Horrigan, J.B.: Use of cloud computing applications and services. Pew Internet & American Life Project memo, Sept (2008)
  6. Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act), Title V, s 505 (2001)
  7. Catteddu, D., Hogben, G. (eds.): Cloud computing: benefits, risks and recommendations for information security. ENISA Report, Nov (2009)
  8. Marchini, R.: Cloud Computing: A Practical Introduction to the Legal Issues. BSI, London (2010)
  9. McKinley, P.K., Samimi, F.A., Shapiro, J.K., Chiping, T.: Service clouds: a distributed infrastructure for constructing autonomic communication services. In: Proceedings of Dependable, Autonomic and Secure Computing (DASC), IEEE, pp. 341–348 (2006)
  10. Warren, S., Brandeis, L.: The right to privacy. Harv. Law Rev. 4, 193 (1890)
  11. Westin, A.: Privacy and Freedom. Atheneum, New York (1967)
  12. American Institute of Certified Public Accountants (AICPA) and CICA: Generally accepted privacy principles, Aug (2009)
  13. Solove, D.J.: A taxonomy of privacy. Univ. Pennsylvania Law Rev. 154(3), 477 (2006)
  14. Nissenbaum, H.: Privacy as contextual integrity. Washington Law Rev. 79, 101–139 (2004)
  15. Nissenbaum, H.: Privacy in Context: Technology, Policy and the Integrity of Social Life. Stanford University Press, Stanford (2009)
  16. Swire, P.P., Bermann, S.: Information Privacy: Official Reference for the Certified Information Privacy Professional, CIPP. International Association of Privacy Professionals, York (2007)
  17. European Commission: Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (1995)
  18. Privacy Protection Study Commission: Personal privacy in an information society. United States Privacy Protection Study Commission fair information practices (1977)
  19. Organisation for Economic Co-operation and Development (OECD): Guidelines on the protection of privacy and transborder flows of personal data (1980)
  20. Safe Harbor website (2012)
  21. The White House: Consumer data privacy in a networked world: a framework for protecting privacy and promoting innovation in the global digital economy, Feb (2012)
  22. European Commission: Proposal for a Directive of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and the free movement of such data, Jan (2012)
  23. ISO: 27001: Information Security Management – Specification with Guidance for Use. ISO, London (2005)
  24. Rousseau, D., Sitkin, S., Burt, R., Camerer, C.: Not so different after all: a cross-discipline view of trust. Acad. Manage. Rev. 23(3), 393–404 (1998)
  25. Baier, A.: Trust and antitrust. Ethics 96(2), 231–260 (1986)
  26. Gambetta, D.: Can we trust trust? In: Gambetta, D. (ed.) Trust: Making and Breaking Cooperative Relations. Basil Blackwell, New York (1988)
  27. Nooteboom, B.: Social capital, institutions and trust. Rev. Soc. Econ. 65(1), 29–53 (2007)
  28. Sitkin, S., Roth, N.: Explaining the limited effectiveness of legalistic ‘remedies’ for trust/distrust. Org. Sci. 4, 367–392 (1993)
  29. Wang, Y., Lin, K.-J.: Reputation-oriented trustworthy computing in e-commerce environments. IEEE Internet Comput. 12(4), 55–59 (2008)
  30. Singh, S., Morley, C.: Young Australians’ privacy, security and trust in internet banking. In: Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7 (2009)
  31. Osterwalder, D.: Trust through evaluation and certification. Soc. Sci. Comput. Rev. 19(1), 32–46 (2001)
  32. Best, S.J., Kreuger, B.S., Ladewig, J.: The effect of risk perceptions on online political participatory decisions. J. Inform. Technol. Polit. 4, 5–17 (2005)
  33. Chang, E., Dillon, T., Calder, D.: Human system interaction with confident computing: the megatrend. In: Proceedings of the Conference on Human System Interactions, Krakow, Poland (2008)
  34. Jaeger, P.T., Fleischmann, K.R.: Public libraries, values, trust, and e-government. Inf. Technol. Libr. 26(4), 35–43 (2007)
  35. Nissenbaum, H.: Can trust be secured online? A theoretical perspective. Etica e Politica 2 (1999)
  36. Giff, S.: The influence of metaphor, smart cards and interface dialogue on trust in e-commerce. M.Sc. project, University College London (2000)
  37. Nielsen, J.: Trust or bust: communicating trustworthiness in web design. Jakob Nielsen’s Alertbox (1999)
  38. Huynh, T.: A personalized framework for trust assessment. ACM Symp. Appl. Comput. 2, 1302–1307 (2008)
  39. Levien, R.: Attack-resistant trust metrics. Ph.D. thesis, University of California, Berkeley (2003)
  40. Ziegler, C.N., Lausen, G.: Spreading activation models for trust propagation. In: Proceedings of EEE 2004, IEEE, Taipei (2004)
  41. Kosko, B.: Fuzzy cognitive maps. Int. J. Man-Mach. Stud. 24, 65–75 (1986)
  42. Pearson, S., Casassa Mont, M., Crane, S.: Persistent and dynamic trust: analysis and the related impact of trusted platforms. In: Herrmann, P., Issarny, V., Shiu, S. (eds.) Trust Management, Proceedings of iTrust 2005, LNCS, vol. 3477, pp. 355–363. Springer, Berlin/Heidelberg (2005)
  43. Gellman, R.: Privacy in the clouds: risks to privacy and confidentiality from cloud computing. World Privacy Forum (2009)
  44. Greenberg, A.: Cloud computing’s stormy side. Forbes Magazine, 19 Feb (2008)
  45. Fratto, M.: Internet evolution. The Big Report, Cloud Control (2009)
  46. Hall, J.A., Liedtka, S.L.: The Sarbanes-Oxley Act: implications for large-scale IT outsourcing. Commun. ACM 50(3), 95–100 (2007)
  47. Reidenberg, J.: Technology and internet jurisdiction. Univ. Pennsylvania Law Rev., SSRN eLibrary (2005)
  48. Kohl, U.: Jurisdiction and the Internet. Cambridge University Press, Cambridge (2007)
  49. Mowbray, M.: The fog over the Grimpen Mire: cloud computing and the law. SCRIPTed J. Law Technol. Soc. 6(1), 132–143 (2009)
  50. Goldberg, N.M., Wildon-Byrne, M.: Securing communications on the cloud. Bloomberg Law Rep. – Technol. Law 1(10) (2009)
  51. Salmon, J.: Clouded in uncertainty – the legal pitfalls of cloud computing. Computing Magazine, 24 Sept (2008)
  52. Crompton, M., Cowper, C., Jefferis, C.: The Australian Dodo case: an insight for data protection regulation. World Data Protection Report 9(1), BNA (2009)
  53. Hon, K.: Personal data in the UK, anonymisation and encryption. Queen Mary University of London, 9 June (2011)
  54. Cloud Security Alliance: Security guidance for critical areas of focus in cloud computing, v2.1, English language version, Dec (2009)
  55. Mather, T., Kumaraswamy, S., Latif, S.: Cloud Security and Privacy. O’Reilly, Sebastopol, CA (2009)
  56. Vaquero, L., Rodero-Merino, L., Morán, D.: Locking the sky: a survey on IaaS cloud security. Computing 91, 93–118 (2011)
  57. Regulation of Investigatory Powers Act: Part II, s 28, UK (2000)
  58. Narayanan, A., Shmatikov, V.: Robust de-anonymization of large sparse datasets. In: Proceedings of the IEEE Symposium on Security and Privacy (S&P), pp. 111–125 (2008). doi:10.1109/SP.2008.33
  59.
  60. Open Cloud Computing Interface (OCCI) (2012)
  61. Google: Data Liberation Front (2012)
  62. SNIA: Cloud Data Management Interface (2012)
  63. OASIS: Security Assertion Markup Language (SAML) (2005)
  64. Wei, J., Zhang, X., Ammons, G., Bala, V., Ning, P.: Managing security of virtual machine images in a cloud environment. In: Proceedings of CCSW ’09, pp. 91–96. ACM, New York (2009)
  65.
  66. Kortchinsky, K.: CLOUDBURST: A VMware Guest to Host Escape Story. Black Hat, Las Vegas (2009)
  67. Ristenpart, T., Tromer, E., Shacham, H., Savage, S.: Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. In: Proceedings of CCS ’09, ACM, Chicago, Nov (2009)
  68. IBM: X-Force 2010 mid-year trend and risk report, Aug (2010)
  69. ENISA: Cloud computing information assurance framework. In: Catteddu, D., Hogben, G. (eds.), Nov (2009)
  70. UK Cabinet Office and CESG: HMG information assurance maturity model and assessment framework (2010)
  71. Jansen, W., Grance, T.: Guidelines on security and privacy in public cloud computing. NIST Special Publication 800-144, Dec (2011)
  72. International Organisation for Standardisation (ISO): ISO/IEC 27001:2005 Information technology – security techniques – information security management systems – requirements (2005)
  73. ISO: ISO/IEC 27002:2005 Information technology – security techniques – code of practice for information security management (2005)
  74. ISO: ISO 31000:2009 Risk management – principles and guidelines (2009)
  75. Shared Assessments: Evaluating cloud risk for the enterprise. The Santa Fe Group, Oct (2010)
  76. Hagen, J.M., Sivertsen, T.K., Rong, C.: Protection against unauthorized access and computer crime in Norwegian enterprises. J. Comput. Secur. 16(3), 341–366 (2008)
  77. CSA: Trusted Cloud Initiative (2012)
  78. Information Commissioner’s Office (ICO): Privacy impact assessment handbook, version 2, June (2009)
  79. Ardagna, C.A., et al.: Exploiting cryptography for privacy-enhanced access control. J. Comput. Secur. 18(1), 123–160 (2010). IOS Press
  80. Data Loss Prevention (2012)
  81. Bier, E., et al.: The rules of redaction: identify, protect, review (and repeat). IEEE Secur. Privacy 7(6), 46–53 (2009)
  82. Information Commissioner’s Office (ICO): Data protection guidance note: privacy enhancing technologies (2007)
  83.
  84. Cranor, L.: Web Privacy with P3P. O’Reilly and Associates, Sebastopol, CA (2002)
  85. EnCoRe: Ensuring Consent and Revocation project (2012)
  86. Cachin, C., Schunter, M.: A cloud you can trust, Dec (2011)
  87.
  88. SysTrust and WebTrust
  89.
  90.
  91.
  92. Takabi, H., Joshi, J.B.D., Ahn, G.: Security and privacy challenges in cloud computing environments. IEEE Secur. Privacy 8(6), 24–31 (2010)
  93. EGEE project: Logging and Bookkeeping (LB) service (2012)
  94.
  95. Nicolett, M., Kavanagh, K.M.: Critical capabilities for security information and event management technology. Gartner Report (2011)
  96. RSA: enVision platform (2012)
  97. HP: ArcSight (2012)
  98. Stoneburner, G., Goguen, A., Feringa, A.: Risk management guide for information technology systems: recommendations of the National Institute of Standards and Technology. NIST Special Publication 800-30, July (2002)
  99. Committee of Sponsoring Organisations of the Treadway Commission (COSO) (2012)
  100. ISACA (2012)
  101.
  102. Amazon: CloudWatch (2012)
  103. Haeberlen, A.: A case for the accountable cloud. ACM SIGOPS Oper. Syst. Rev. 44(2), 52–57 (2010)
  104. Haeberlen, A., et al.: Accountable virtual machines. In: Proceedings of OSDI ’10, USENIX, Vancouver, Canada (2010)
  105.
  106. Chen, S., Wang, C.: Accountability as a service for the cloud: from concept to implementation with BPEL. In: Proceedings of the 6th IEEE World Congress on Services, IEEE, pp. 91–98 (2010)
  107. Jaeger, P., Lin, J., Grimes, J.: Cloud computing and information policy: computing in a policy cloud? J. Inf. Technol. Polit. 5, 269–283 (2008)
  108. European Commission: Attitudes on data protection and electronic identity in the European Union, June (2011)
  109. IDC: Cloud computing attitudes. Survey, Doc. #223077 (2010)
  110. Forrester Research, Inc.: Ignoring cloud risks: a growing gap between I&O and the business, Mar (2011)
  111. Forrester Research, Inc.: You’re not ready for internal cloud, July (2010)
  112. Goldman Sachs: Equity Research, Jan (2011)
  113. Fujitsu Research Institute: Personal data in the cloud: a global survey of consumer attitudes (2010)
  114. Uusitalo, I., Karppinen, K., Arto, J., Savola, R.: Trust and cloud services – an interview study. In: Proceedings of CloudCom 2010, IEEE, Indianapolis (2010)
  115. Lacohée, H., Crane, S., Phippen, A.: Trustguide Final Report, Oct. DTI Sciencewise Programme (2006)
  116. Artz, D., Gil, Y.: A survey of trust in computer science and the semantic web. Web Semant. Sci. Serv. Agents World Wide Web 5, 58–71 (2007)
  117. Li, W., Ping, L.: Trust model to enhance security and interoperability of cloud environment. In: Cloud Computing. Lecture Notes in Computer Science, vol. 5931, pp. 69–79. Springer, Berlin (2009)
  118. Marsh, S.: Formalising trust as a computational concept. Doctoral dissertation, University of Stirling (1994)
  119. Banerjee, S., Mattmann, C., Medvidovic, N., Golubchik, L.: Leveraging architectural models to inject trust into software systems. In: Proceedings of SESS ’05, pp. 1–7. ACM, New York (2005)
  120. The Centre for Information Policy Leadership (CIPL): Demonstrating and measuring accountability: a discussion document. Accountability Phase II – The Paris Project (2010)
  121. Shin, D., Ahn, G.-J.: Role-based privilege and trust management. Comput. Syst. Sci. Eng. J. 20(6), 401–410 (2005)
  122. CSA: Cloud Trust Protocol (2012)
  123. Weitzner, D.J., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J., Sussman, G.J.: Information accountability. Commun. ACM 51(6), 82–87 (2008)
  124. Pearson, S., Charlesworth, A.: Accountability as a way forward for privacy protection in the cloud. In: Jaatun, M.G., Zhao, G., Rong, C. (eds.) Proceedings of the 1st International Conference on Cloud Computing (CloudCom 2009), Beijing, Dec. LNCS, vol. 5931, pp. 131–144. Springer, Berlin (2009)
  125. Pearson, S., et al.: Scalable, accountable privacy management for large organizations. In: INSPEC 2009, IEEE, Sept, pp. 168–175 (2009)
  126. Information Commissioner’s Office: Privacy by design. Report (2008)
  127. Cavoukian, A., Taylor, S., Abrams, M.: Privacy by design: essential for organizational accountability and strong business practices. Identity Inf. Soc. 3(2), 405–413 (2010)
  128. Cavoukian, A.: Privacy by design: origins, meaning, and prospects for assuring privacy and trust in the information era. In: Yee, G. (ed.) Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards, pp. 170–208. IGI Global, Hershey (2012)
  129. Camenisch, J., Fischer-Hübner, S., Rannenberg, K. (eds.): Privacy and Identity Management for Life. Springer, Heidelberg (2011)
  130. Kamara, S., Lauter, K.: Cryptographic cloud storage. In: Financial Cryptography and Data Security. LNCS, vol. 6054, pp. 136–149. Springer, Berlin (2010). doi:10.1007/978-3-642-14992-4_13
  131. Gentry, C.: Fully homomorphic encryption using ideal lattices. In: 41st ACM Symposium on Theory of Computing (STOC), pp. 169–178. ACM, New York (2009)
  132. Spiekermann, S., Cranor, L.F.: Engineering privacy. IEEE Trans. Softw. Eng. 35(1), 67–82 (2009)

Recommended Reading

  1. Camenisch, J., Fischer-Hübner, S., Rannenberg, K. (eds.): Privacy and Identity Management for Life. Springer, Berlin (2011)
  2. Catteddu, D., Hogben, G. (eds.): Cloud computing: benefits, risks and recommendations for information security. ENISA Report (2009)
  3. Cavoukian, A., Taylor, S., Abrams, M.: Privacy by design: essential for organizational accountability and strong business practices. Identity Inf. Soc. 3(2), 405–413 (2010)
  4. Cloud Security Alliance (CSA): Security guidance for critical areas of focus in cloud computing, v2.1, English language version, Dec (2009)
  5. Cofta, P.: The trustworthy and trusted web. Found. Trends Web Sci. 2(4), 243–381 (2011)
  6. Craig, T., Ludloff, M.E.: Privacy and Big Data. O’Reilly, Sebastopol, CA (2011)
  7. Gellman, R.: Privacy in the clouds: risks to privacy and confidentiality from cloud computing. World Privacy Forum (2009)
  8. Information Commissioner’s Office: Privacy by design. Report, Nov (2008)
  9. Mather, T., Kumaraswamy, S., Latif, S.: Cloud Security and Privacy. O’Reilly, Sebastopol, CA (2009)
  10. Pearson, S.: Toward accountability in the cloud. IEEE Internet Comput. 15(4), 64–69 (2011)
  11. Pearson, S., Casassa Mont, M.: Sticky policies: an approach for privacy management across multiple parties. IEEE Comput. 44(9), 60–68 (2011)
  12. Schwartz, P.M.: Data protection law and the ethical use of analytics. CIPL (2010)
  13. Solove, D.J.: Nothing to Hide: The False Tradeoff between Privacy and Security. Yale University Press, New Haven (2011)
  14. The Royal Academy of Engineering: Dilemmas of Privacy and Surveillance: Challenges of Technological Change, Mar (2007)
  15. Yee, G. (ed.): Privacy Protection Measures and Technologies in Business Organizations: Aspects and Standards. IGI Global, Hershey (2012)

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

Cloud and Security Lab, HP Labs, Bristol, UK
