Improving Cloud Assurance and Transparency Through Accountability Mechanisms


Abstract

Accountability is a critical prerequisite for effective governance and control of corporate and private data processed by cloud-based information technology services. This chapter clarifies how accountability tools and practices can enhance cloud assurance and transparency in a variety of ways. Relevant techniques and terminologies are presented, and a scenario is considered to illustrate the related issues. In addition, related examples are provided involving cutting-edge research and development in fields such as risk management, security and privacy level agreements and continuous security monitoring. The arguments provided seek to justify the use of accountability-based approaches as an improved basis for consumers’ trust in cloud computing, thereby benefiting the uptake of this technology.

Keywords

Accountability · Assurance · Cloud computing · Continuous monitoring · Privacy level agreement (PLA) · Service level agreement (SLA) · Transparency

9.1 Introduction

The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [34]. This model enables a very significant change in the information technology (IT) landscape that can allow benefits to organisations including economies of scale, reduced spending on technology infrastructure and improved accessibility and flexibility. As a result, more and more organisations, in particular small- and medium-sized enterprises (SMEs), are using the cloud, either as end users or in order to become part of the service supply chain.

However, a major perceived barrier to cloud usage is still the lack of transparency and assurance of cloud service providers (CSPs). The lack of transparency can arise for different reasons and be compounded by lack of knowledge and action by data controllers and data processors (e.g. about the processing chain, inadequate levels of data protection, the illegality of transfers or the law applicable to data protection disputes) [17]. This is one aspect of the lack of accountability in the data protection context [18]: accountability was seen in a recent report by the International Data Corporation (IDC) as the most important action that business users thought would help improve cloud adoption [27]. In order for potential cloud customers to make appropriate assessments about the suitability of using the cloud in a given context, and indeed of particular cloud service providers, it is very important that these customers are provided both with the relevant information about the cloud service providers and with an awareness of what they should be assessing, which includes not only organisational security risk aspects but also consideration of potential harm to data subjects and how that might be avoided, as well as wider regulatory and contractual compliance aspects. Furthermore, in the case of data breaches, notifications must be provided to a number of parties that may include data subjects, cloud customers and regulatory authorities; enhanced transparency in this respect may not only affect follow-up actions relating to remediation and redress but also improve future decision-making processes.

In this chapter we argue that more transparency and assurance can be achieved through an accountability-based approach in which new processes, mechanisms and tools are put in place. After assessing the state of the art and current inadequacies in this regard in Sect. 9.2, we consider in Sect. 9.3 how accountability relates to transparency and assurance. In Sect. 9.4 we present a motivating cloud scenario with reference to which further discussions will be made. In Sects. 9.5, 9.6 and 9.7, we explain how greater transparency and assurance may be achieved via three core elements: a risk assessment approach adapted to cloud service providers (CSPs); improved levels of assurance through Service Level Agreements (SLAs) and Privacy Level Agreements (PLAs); and transparency during service provision, including usage of continuous monitoring. Furthermore, in Sect. 9.8, we present novel mechanisms that can be provided at different phases of an accountability lifecycle. These demonstrate how risk management and transparency can be enhanced in the provisioning for accountability phase, how continuous monitoring can be utilised in an operational phase and how audit and certification can be linked to policies such as PLAs. Finally, we present conclusions.

9.2 State of the Art

In this section we provide a brief introduction to related work in order to frame the context and contribution of our chapter.

Security information and event management (SIEM or SIM/SEM) solutions have a critical role in monitoring operational security and supporting organisations in decision making. They provide a standardised approach to collecting information and events, storing and querying them and providing degrees of correlation, usually driven by rules. The leading SIEM solutions in the market, as analysed by Gartner [32], are:
  1. HP ArcSight: This is oriented to large-scale security event management and offers appliance-based preconfigured monitoring of log data, management functions and reporting.

  2. IBM Security QRadar: This provides log management, event management, reporting and behavioural analysis for networks and applications.

  3. LogRhythm and McAfee Enterprise Security Manager: These can be deployed in smaller environments, supporting log management and network forensic capabilities.

  4. EMC Corp. RSA Security Analytics: This provides log and full packet data capture, security monitoring, forensic investigation and big data analytic methods.
SIEM solutions do not cover business audit and strategic (security) risk assessment but instead provide inputs that need to be properly analysed and translated into a suitable format to be used by senior risk assessors and strategic policymakers. Risk assessment standards such as ISO 2700x, NIST [35], etc. operate at a macro level and usually do not fully utilise information coming from logging and auditing activities carried out by information technology (IT) operations. Similarly, there exist a number of frameworks for auditing a company’s IT controls, most notably COSO and COBIT.

Other types of detective mechanism are concerned with cloud service usage rather than security and information monitoring. There exists a class of evidence-related cloud technologies that provide generic mechanisms supporting basic logging and monitoring. Examples are:
  • Sumo Logic: This is a log management platform that allows a cloud provider to collect log data from applications, networks, firewalls and intrusion detection systems.

  • Amazon Web Services (AWS) CloudTrail: This is a Web service that records AWS API calls for a customer’s account and delivers log files to the customer. CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the call, the request parameters and the response elements returned by the AWS service. This information helps to track changes made to the customer’s AWS resources and to troubleshoot operational issues. The main purpose of CloudTrail is to make it easier to ensure compliance with internal policies and regulatory standards (a minimal consumption sketch follows this list).

  • Logentries: This collects and analyses logs across software stacks using a preprocessing layer to filter, correlate and visualise log data.
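To make the role of such audit-trail records as evidence concrete, the following is a minimal sketch of consuming one record. The JSON field names and the sensitive-operation list are assumptions for illustration, loosely modelled on the record contents described above rather than taken from any product’s actual schema.

```python
import json

# Hypothetical audit-trail record; field names are illustrative only, modelled
# loosely on the record contents described above (API name, caller identity,
# time, request parameters, response elements).
record_json = """
{
  "eventTime": "2015-06-01T10:15:00Z",
  "eventName": "DeleteBucket",
  "caller": "alice@wearableco.example",
  "requestParameters": {"bucketName": "wearable-archive"},
  "responseElements": null
}
"""

# Operations treated as sensitive by an assumed internal policy.
SENSITIVE_OPERATIONS = {"DeleteBucket", "PutBucketAcl", "StopLogging"}

def flag_sensitive(record: dict) -> bool:
    """Return True if the logged call should be reviewed by an auditor."""
    return record.get("eventName") in SENSITIVE_OPERATIONS

record = json.loads(record_json)
if flag_sensitive(record):
    print(f"Review: {record['caller']} called {record['eventName']} at {record['eventTime']}")
```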

Different security controls can be identified that organisations need to implement in the cloud (see, e.g. [37]). From a management viewpoint, it is possible to identify critical processes (e.g. security risk assessment and privacy management) that address the mitigation of security and privacy threats [40], and the essential elements of a good organisational privacy management programme have been defined [42]. Furthermore, an accountability framework for the cloud context has been described [44, 45], in which different types of accountability tools (usable either individually or in combination) may be provided for cloud users, providers and governance entities, ranging from preventive tools that aid informed choice, control and decision making upfront and thereby can reduce privacy and security risk, to detective tools that monitor for and report policy violations, to corrective tools that support remediation and redress in case of failure. Certification can be an important aspect of such frameworks. ENISA has developed a Cloud Certification Schemes Metaframework (CCSM) that classifies different types of security certification (aligned with specific standards) for cloud providers [22]. This meta-framework is used to compare a number of different certifications identified within the Cloud Certification Schemes List (CCSL). The overall objective of this framework is to make the cloud more transparent for cloud customers, in particular regarding the way cloud providers meet specific security objectives.

In general, technical security measures (such as strong cryptography) can help prevent falsification of logs, and privacy-enhancing techniques and adequate access control should be used to protect personal information in logs. Relevant techniques include non-repudiable logs, backups, distributed logging, forward integrity via hash chains and automated tools for log audits. More specifically, in [51], a collaborative monitoring mechanism is proposed for making multitenant platforms accountable. A third-party external service is used to provide a “supporting evidence collection” that contains evidence for Service Level Agreement (SLA) compliance checking (defined distinctly from runtime logs). This type of service is presented as an accountability service, in other words one that offers “a mechanism for clients to authenticate the correctness of the data and the execution of their business logic in a multitenant platform”. The external accountability service contains a Merkle B-tree structure with the hashes of the operation signatures concatenated with the new values of data after state changes occur. This work includes algorithms for logging and request processing and an evaluation of a testing environment implemented in Amazon EC2. Furthermore, Butin et al. [5] propose a framework for accountability of practice using privacy-friendly logs. They take the approach of using formal methods to define the “accounts” and the accountability process used to analyse these accounts. In the accountability process, abstract events and logs are related, and correctness is proved such that an abstract event denotes a concrete log, thus allowing analysis of the abstract events for compliance instead of the log itself. Compliance with respect to policies was formally checked and proved. One of the concerns identified by the authors is to verify that the logs, which are the basis of any accountability system, reflect the actual and complete activities of the data controller. In general, their work models an accountability framework that is formally proved and verified, in contrast to the approach described in this chapter, which is an implementation that collects evidence, analyses it, generates policy-based notifications/violations and generates audit reports.
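As a minimal sketch of the forward-integrity idea mentioned above, the following example (assuming SHA-256 and JSON-serialisable log entries; it does not reproduce the mechanisms of [51] or [5]) chains each log entry to the digest of its predecessor, so that modification or truncation of earlier entries becomes detectable when the chain is recomputed.

```python
import hashlib
import json

def chain_digest(previous_digest: str, entry: dict) -> str:
    """Digest of the new entry bound to the previous digest (forward integrity)."""
    payload = previous_digest + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    previous = log[-1]["digest"] if log else "0" * 64  # genesis value
    log.append({"entry": entry, "digest": chain_digest(previous, entry)})

def verify(log: list) -> bool:
    """Recompute the chain; any tampering with earlier entries breaks it."""
    previous = "0" * 64
    for item in log:
        if chain_digest(previous, item["entry"]) != item["digest"]:
            return False
        previous = item["digest"]
    return True

log = []
append(log, {"actor": "processor-A", "action": "read", "data": "record-42"})
append(log, {"actor": "processor-A", "action": "transfer", "data": "record-42"})
print(verify(log))                      # True
log[0]["entry"]["action"] = "delete"    # tamper with an earlier entry
print(verify(log))                      # False
```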

9.3 The Relationship Between Accountability and Assurance

Accountability is the state of accepting allocated responsibilities (including usage and onward transfer of data to and from third parties), explaining and demonstrating compliance to stakeholders and remedying any failure to act properly. Responsibilities may be derived from law, social norms, agreements, organisational values and ethical obligations. Furthermore, accountability for complying with measures that give effect to practices articulated in given guidelines has been present in many core frameworks for privacy protection [43]. Within cloud ecosystems, accountability is becoming an important (new) notion, defining the relations between various stakeholders and their behaviours towards data in the cloud.

Specifically, an accountor is accountable to an accountee for:
  • Norms: the obligations and permissions that define data practices

  • Behaviour: the actual data processing behaviour of an organisation

  • Compliance: the comparison of an organisation’s actual behaviour with the norms

By the accountor exposing the norms it subscribes to and its behaviour, the accountee can check compliance. Norms can be expressed in policies and they derive from law, contracts and ethics. We consider further the definition and exposure of certain types of norms in a cloud context in Sect. 9.6 (focusing on SLAs).
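The following minimal sketch illustrates this relationship in machine-readable form; the norm attributes and event fields are invented for illustration and are far simpler than real policy languages.

```python
# Norms: obligations and permissions expressed as a simple machine-readable policy
# (illustrative values only; real norms derive from law, contracts and ethics).
norms = {
    "allowed_purposes": {"health-monitoring"},
    "allowed_locations": {"EU"},
}

# Behaviour: the actual data processing events observed for an organisation.
behaviour = [
    {"purpose": "health-monitoring", "location": "EU"},
    {"purpose": "marketing", "location": "EU"},   # not permitted by the norms
]

# Compliance: the comparison of actual behaviour with the norms,
# which is what the accountee can check once both are exposed.
def check_compliance(norms, behaviour):
    violations = []
    for event in behaviour:
        if event["purpose"] not in norms["allowed_purposes"] or \
           event["location"] not in norms["allowed_locations"]:
            violations.append(event)
    return violations

print(check_compliance(norms, behaviour))
# [{'purpose': 'marketing', 'location': 'EU'}]
```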

Accountability and good systems design (in particular, to meet privacy and security requirements) are complementary, in that the latter provides mechanisms and controls that allow implementation of principles and standards, whereas accountability makes organisations responsible for providing an appropriate implementation for their business context and addresses what happens in case of failure (i.e. if the account is not provided or is not adequate, if the organisation’s obligations are not met, e.g. there is a data breach, etc.). Section 9.5 further elaborates the cloud-adapted risk assessment methodology targeting the elicitation of controls from an accountability perspective. Part of the justification that appropriate measures have been used comes from an enhanced risk assessment process. The role of a risk-based approach in data protection has been considered by a number of parties, including assessment of the relative value of such an approach [4], modification of the data protection principles to take it into account [41], analysis of its relationship with accountability [24] and recent regulatory analysis [7, 19].

Typically in a cloud ecosystem in a data protection context, the accountors are cloud actors which are organisations (or individuals with certain responsibilities within those) acting as a data steward for other actors’ personal data or business secrets. The accountees are other cloud actors, which may include private accountability agents, consumer organisations, the public at large and entities involved in governance. In addition, a connection between appropriateness and effectiveness can be made through the agreed SLA, which will contain committed security and privacy values relating to each metric selected, as discussed further in Sect. 9.6.

A core attribute of accountability is transparency, which is “the property of a system, organisation or individual of providing visibility of its governing norms, behaviour and compliance of behaviour to the norms” [23]. A distinction can be made between ex ante transparency, which is concerned with “the anticipation of consequences before data is actually disclosed (e.g. in the form of a certain behaviour)” [26] and ex post transparency, which is concerned with informing “about consequences if data already has been revealed” [26]. Being transparent is required not only with respect to the identified objects of the cloud ecosystem (i.e. norms, behaviour and compliance) but also with respect to remediation. In Sect. 9.7, we consider further how transparency may be provided during cloud service provision.

Although organisations can select from accountability mechanisms and tools in order to meet their context, the choice of such tools needs to be justified to external parties. A strong accountability approach would include moving beyond accountability of policies and procedures, to accountability of practice giving accountability evidence. Accountability evidence can be defined as “a collection of data, metadata, routine information, and formal operations performed on data and metadata, which provide attributable and verifiable account of the fulfilment (or not) of relevant obligations; it can be used to support an argument on the validity of claims about appropriate functioning of an observable system” [52].

Accountability evidence, as illustrated in Fig. 9.1, needs to be provided at a number of layers. At the organisational policy level, this would involve provision of evidence that the policies are appropriate for the context, which is typically what is done when privacy seals are issued. But this alone is rather weak; in addition, evidence can be provided about the measures, mechanisms and controls that are deployed and their configuration, to show that these are appropriate for the context. For example, evidence could be provided that privacy-enhancing technologies (PETs) have been used, to support anonymisation requirements expressed at the policy level. For higher-risk situations, continuous monitoring may be needed to provide evidence that what is claimed in the policies is actually being met in practice; even if this is not sophisticated, some form of checking the operational running and feeding this back into the accountability management programme in order to improve it is part of accountability practice, as described above, and hence evidence will need to be generated at this level too. In particular, technical measures should be deployed to enhance the integrity and authenticity of logs, and there should be enhanced reasoning about how these logs show whether or not data protection obligations have been fulfilled. The evidence from the above would be reflected in the account and would serve as a basis for verification and certification by independent, trusted entities. As we shall consider further in Sect. 9.7, the actual assessment of the effectiveness of the IT controls is performed during the service’s operation.
Fig. 9.1

Accountability evidence

Accountability evidence contributes towards the more general concept of assurance, which can be viewed as “Grounds for confidence that the other four security goals (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes (1) functionality that performs correctly, (2) sufficient protection against unintentional errors (by users or software), and (3) sufficient resistance to intentional penetration or bypass” [48]. Different kinds of assurance need to be provided during the accountability lifecycle described in Sect. 9.8. In particular, as we shall consider further in Sect. 9.8, in an initial phase of provisioning for accountability, assurance (based, e.g. on assessment of capabilities) needs to be provided about the appropriateness and effectiveness of the cloud service providers under consideration; from an operational perspective, there needs to be both internal and external demonstration (to relevant stakeholders) that the organisation is operating in an accountable manner (e.g. built on monitoring evidence and via accounts); external parties will be involved in audit and validation based on information made available to them.

Before moving on to consider these aspects in more detail, we present an illustrative cloud scenario that will be used as a reference point.

9.4 Case Study

This section describes a scenario to motivate the main transparency-/assurance-related issues that confront cloud customers, in particular small- and medium-sized enterprises (SMEs) embracing the cloud.

“Wearable Co” is a manufacturer of wearable devices that needs to select a CSP to build a Web-based service on its behalf. This service should facilitate processing and storing wearable data while providing user-level functionalities, which will be consumed by the Wearable customers via a Web user interface (UI). The wearable data will be collected in two ways: automatic collection via the wearable devices and manual input from the customer via the Web application/UI. Part of the application will involve visualisation of aggregated statistics on maps. The service should be realised through one or more CSPs (i.e. a cloud service supply chain).

Wearable Co has concerns with respect to the implementation of the Wearable service in the cloud. These concerns are driven by the Data Protection Authority (DPA), which oversees fulfilment of the legal framework and the type of personal data that may be collected. In this case, Wearable Co should ensure that all the personal data collected from the Wearable customers (either automatically or manually) is protected at all phases of its processing, and that only a specific set of geographical data centres is considered for storage. At the same time, this SME should apply specific data access and data handling rules to be enforced at runtime by all the stakeholders involved in the Wearable service.

Let us also consider CardioMon, which is a software-as-a-service (SaaS) CSP offering a complete solution by providing features for collecting, managing, storing and processing wellbeing data. CardioMon does business with many other customers with functional/business requirements similar to those of Wearable Co and has an existing business agreement with DataSpacer, an infrastructure-as-a-service (IaaS) cloud provider, which offers advanced security and privacy mechanisms for the protected storage and retrieval of personal and sensitive information. Furthermore, the CardioMon service allows the core functionality to be expanded via third-party services to enrich the experience of the users. Such an expansion is provided by Map-on-Web, a separate SaaS cloud provider with expertise in map visualisations for big data sets. Thus, Map-on-Web complements CardioMon by expanding the available data visualisation features.

The scenario of this use case assumes that (after a period of researching the market) Wearable Co selects CardioMon as the provider of the software service that it will make available to its customers. A high-level system model of the described case study is shown in Fig. 9.2.
Fig. 9.2

Use case scenario – system model and involved actors

The next three sections of the chapter will elaborate on the assurance and transparency issues related to this use case along with the proposed solutions from the risk assessment (Sect. 9.5), Service Level Agreement (Sect. 9.6) and operational provisioning of assurance (Sect. 9.7) perspectives.

9.5 Risk Assessment

Organisations aiming to reap the benefits of the cloud (like the Wearable Co introduced in Sect. 9.4) need mechanisms to implement good-enough information security and data protection. These SMEs often find it convenient to start with an introspective view that identifies both the assets to protect and the risks to consider when migrating to the cloud. Risk, as explained in risk management frameworks (RMFs) such as NIST SP 800-30 [35], is strictly tied to uncertainty, and most approaches to managing risk are based on the probability of an event happening. However, cloud security/privacy requirements reflect intrinsic security problems not seen in regular IT security scenarios. Current risk assessment methods are not tailored to cloud computing: the lack of transparency about how cloud service ecosystems are composed prevents the seamless application of traditional methodologies and standards. While the future Internet creates new business opportunities, it also creates a variety of new risks as connectivity increases and multiple domains are created by trust and organisational boundaries. Because of their set-up, scenarios like the one shown in Fig. 9.2 create several types of technical, organisational and regulatory “complexities” and risks. Take, for example, the following:
  • Availability issues in any of the involved CSPs, resulting in the full/partial unavailability of the Wearable service

  • Insider attacks on the IaaS provider (DataSpacer) resulting in the customers’ personally identifiable information (PII) being leaked

  • Vendor lock-in issues preventing Wearable Co from migrating their business to a different cloud supply chain

  • Lack of service transparency (in particular related to data handling), resulting in customers being unaware of where their (own) data is being stored and processed

As IT functions are spread across the cloud, companies like Wearable Co will need not only event monitoring systems that cross cloud boundaries but also assurance systems that demonstrate that each CSP is enforcing its required policies and that the combination adequately manages risk. However, the missing link amongst these objectives is a common model that allows risk to be assessed under given trust assumptions, together with a way to use such representations to derive the contracts, policies and security/privacy controls that will enable accountability.

The cloud delivery model and service type chosen for adoption, together with the security controls selected for the ecosystem, need to ensure that the system preserves its security posture. Therefore, a properly performed risk management cycle should ensure that the residual risk remaining after securing the ecosystem is minimal and that the system achieves a security posture equivalent to that of an on-premise technology architecture or solution (an in-house Wearable solution). Moreover, the type of deployment model selected has an impact on the distribution of security responsibilities amongst the cloud actors, as related to the security conservation principle [37].

Despite the variety of approaches to cloud risk management derived from relevant works [36], the challenges associated with complex supply chains (from a risk management perspective) have only recently resulted in new initiatives to address them. The key element for the successful adoption of assurance and transparency in a cloud-based system solution is the cloud consumers’ full understanding of the cloud-specific traits and characteristics, the architectural components for each cloud service type and deployment model, and each cloud actor’s precise role in orchestrating a secure ecosystem. How confident cloud customers feel that the risk related to using cloud services is acceptable depends on how much trust they place in those involved in orchestrating the surrounding cloud ecosystem. The risk management process ensures that issues are identified and mitigated early in the investment cycle and followed up with periodic reviews. Since cloud customers and other cloud actors involved in securely orchestrating a cloud ecosystem (cf. Fig. 9.2) have differing degrees of control over cloud-based IT resources, they need to share the responsibility of implementing and continuously assessing the security requirements.

Furthermore, it is essential for the cloud consumers’ business and mission-critical processes that they be able to identify all cloud-specific risk-adjusted security and privacy controls. Cloud consumers need to leverage their contractual agreements to hold the CSPs accountable for the implementation of the security and data protection controls. They also need to be able to assess the correct implementation and continuously monitor all identified security controls. But what are the elements of a successful cloud risk management strategy in order to enable transparency and assurance? The draft NIST SP 800-173, Cloud-Adapted Risk Management Framework (CRMF) [39], is one of the most relevant works addressing this issue.

The CRMF was first highlighted in NIST SP 500-299, Cloud Computing Security Reference Architecture [37]. This specification discusses several key aspects of managing risks associated with a cloud environment while stressing the importance of adhering to the security conservation principle. CRMF is a cyclically executed process composed of a set of coordinated activities for overseeing and controlling risks. This set of activities consists of:
  • Risk assessment

  • Risk treatment

  • Risk control

These activities collectively target the enhancement of strategic and tactical cloud security and privacy in scenarios like the Wearable Co one. The NIST Cloud-Adapted Risk Management Framework (CRMF) provides a consumer-centric approach while closely following the original RMF, identifying the six steps shown in Table 9.1.
Table 9.1

Managing risks in the Wearable Co scenario

CRMF activity: Risk assessment

Step 1 – Conduct an impact analysis to categorise the information system that has been migrated to the cloud and the information that is processed, stored and transmitted by that system.
Wearable Co implementation: This is carried out by Wearable Co itself, usually during the design phase of the service. The customers’ PII must be clearly identified in accordance with applicable regulations.

Step 2 – Identify the security and privacy requirements of the system by performing a risk assessment. Select the baseline and tailored supplemental security and privacy controls.
Wearable Co implementation: Wearable Co might decide to perform the assessment considering the risk classification proposed by ENISA [21]. Security and privacy controls can be mapped to the Cloud Security Alliance (CSA) Cloud Control Matrix (CCM [9]) and Privacy Level Agreements (PLAs [12]) best practices.

CRMF activity: Risk treatment

Step 3 – Select the cloud ecosystem architecture that best suits the assessment results for the system.
Wearable Co implementation: Wearable Co should decide on both the cloud deployment and cloud service model to use (i.e. the supply chain shown in Fig. 9.2).

Step 4 – Assess the service provider options. Identify which of the security controls needed for the system the cloud provider has implemented. Negotiate the implementation of any additional security controls that are identified. Identify any remaining security controls that fall under the cloud consumer’s responsibility for implementation.
Wearable Co implementation: In order to make a well-informed decision based on the elicited security and privacy controls (cf. Step 2), Wearable Co might decide to search and compare CSP offers based on publicly available information. Repositories like the CSA STAR [11] and information contained in applicable certifications and Service Level Agreements (SLAs) will prove useful during this stage. Section 9.6 will further expand on these topics.

CRMF activity: Risk control

Step 5 – Select and authorise a cloud provider to host the cloud consumer’s information system. Draft a Service Level Agreement that lists the negotiated contractual terms and conditions.
Wearable Co implementation: Wearable Co agrees on the SLAs with the different CSPs participating in the supply chain, i.e. CardioMon, Map-on-Web and DataSpacer in Fig. 9.2. Assurance and transparency in relation to cloud SLAs will be further explained in the next section.

Step 6 – Monitor the cloud provider to ensure that all Service Level Agreement terms are being met. Ensure that the cloud-based system maintains the necessary security and privacy posture. Monitor the security controls that fall under the cloud consumer’s responsibility.
Wearable Co implementation: This is an essential stage to fully close the assurance and transparency lifecycle. Wearable Co (and also the other CSPs) need to deploy the required continuous monitoring/certification mechanisms to guarantee the fulfilment of the security and privacy requirements during service operation. This will be further analysed in Sect. 9.7.

The described risk-based approach to managing information systems is a holistic activity that should be fully integrated into every aspect of the Wearable Co scenario, from planning and system development lifecycle processes (Steps 1–2) to security/privacy control allocation (Steps 3–5). The selection and specification of security and privacy controls support effectiveness and efficiency while satisfying constraints imposed by applicable laws, directives, policies, standards and regulations. The resulting set of security and privacy controls (baseline and tailored controls, controls inherited from the supply chain and controls under the customer’s direct implementation and management) derived from applying the CRMF (Steps 1–4) leads gradually to the creation of the applicable cloud SLA in the CRMF’s Step 5, as explained next.

9.6 Service Level Agreements

A lack of assurance and transparency, along with the current paucity of techniques to quantify security and privacy levels, often results in cloud customers (in particular SMEs like the Wearable Co in Fig. 9.2) being unable to assess the security of the CSPs they are paying for. Despite the advocated economic and performance-related advantages of the use case presented in Sect. 9.4, two issues arise:
  1. How can the (non-security expert) Wearable Co meaningfully assess whether the CSPs in the supply chain fulfil its security/privacy requirements?

  2. How do all CSPs (including the Wearable Co itself) provide assurance and transparency during the full cloud service lifecycle?

This section will focus on exploring the first issue and how it is being solved using the state of the art through the use of security and privacy parameters in cloud SLAs. Operational assurance/transparency will be discussed in Sect. 9.7.

With regard to the implementation of the elicited security and privacy controls (cf. Step 1 in Table 9.1), the CSPs within the supply chain can only assume the type of data the Wearable Co customer will generate and use during the operational phase of the cloud service; therefore, the CSPs are not aware of the additional security and privacy requirements and tailored controls deemed necessary to protect the Wearable Co customers’ PII. Customers require mechanisms and tools that enable them to understand and assess what “good-enough security” means in the cloud. This requirement is critical when assessing whether, for example, the Wearable Co security and privacy requirements are being fulfilled by the controls and certifications implemented by the selected CSPs.

Fortunately, stakeholders in the cloud community have identified that specifying security and privacy parameters in Service Level Agreements (termed as secSLA and PLA, respectively, in the rest of this chapter) is useful to establish a common semantics to provide and manage security assurance from two perspectives, namely, (i) the security/privacy level being offered by a CSP and (ii) the security/privacy level requested by a cloud customer.

In order to develop the full context of the value of secSLAs and PLAs for our case study, we introduce next the rationale of SLA usage along with the basic vocabulary.

9.6.1 Importance of secSLAs and PLAs for Cloud Transparency

Contracts and Service Level Agreements (SLAs) are key components defining cloud services. According to the ETSI Cloud Standards Coordination group [20], SLAs should facilitate cloud customers in understanding what is being claimed for the cloud service and in relating such claims to their own requirements. Naturally, better assessments and informed user decisions will also increase trust and transparency between cloud customers and CSPs.

A recent report from the European Commission [15] considers SLAs the dominant means for CSPs to establish their credibility and attract or retain cloud customers, since SLAs can be used as a mechanism for service differentiation in the CSP market. This report suggests a standardised SLA specification aiming to realise the full potential of SLAs, so that all cloud customers can understand what is being claimed for a cloud service and relate those claims to their own requirements.

At the SecureCloud 2014 event [13], the Cloud Security Alliance (CSA) compiled and launched an online survey to better understand the current usage and needs of European cloud customers and CSPs with respect to SLAs. Almost 200 cloud customer and CSP respondents, in roughly equal numbers (80 % from the private sector, 15 % from the public sector and 5 % from other backgrounds), provided some initial findings on the use of standardised cloud SLAs. Respondents ranked the two top reasons why cloud SLAs are important as (1) being able “to better understand the level of security and data protection offered by the CSP” (41 %) and (2) “to monitor the CSP’s performance and security and privacy levels” (35 %). Furthermore, based on the respondents’ experiences, the key issues in making cloud SLAs “more usable” for cloud customers were (1) the need for “clear SLO [Service Level Objective] metrics and measurements” (66 %), (2) “making the SLAs easy to understand for different audiences (managers, technical and legal staff, etc.)” (62 %), (3) “having common/standardised vocabularies” (58 %) and (4) “clear notions of/maturity of SLAs for Security and Privacy” (52 %). These responses are empirical indicators of the need to develop the field of cloud secSLAs and PLAs and the techniques to reason about them.

9.6.2 How Are secSLAs and PLAs Structured?

This section summarises the basic cloud secSLA/PLA terminology, based (where applicable) on the latest version of the relevant ISO/IEC 19086 standard [28] and the CSA PLA initiative [12]. A cloud SLA is a documented agreement between the CSP and the customer that identifies cloud services and SLOs, which are the targets for service levels that the CSP agrees to meet. If a SLO defined in the cloud SLA is not met, the cloud customer may request a remedy (e.g. financial compensation). If the SLOs cannot be (quantitatively) evaluated, then it is not possible for customers or CSPs to assess if the agreed SLA is being fulfilled. This is particularly critical in the case of secSLAs and PLAs, but there is also an open challenge on how to define useful (and quantifiable) security and privacy SLOs.

In general, an SLO is composed of one or more metrics (either quantitative or qualitative), where the SLO metrics are used to set the boundaries and margins of error CSPs have to abide by (along with their limitations). Considering factors such as the advocated familiarity of practitioners with security/privacy control frameworks (e.g. ISO/IEC 27002 [29], CSA CCM [9] and CICA [3]), the relevant workgroups (e.g. the European Commission (EC) Cloud Select Industry Group on Service Level Agreements (C-SIG SLA) [16]) have proposed an approach that iteratively refines individual controls into one or more measurable security SLOs. The elicited SLO metrics can then be mapped into a conceptual model (such as the one proposed by the members of the NIST Public RATAX Working Group [38]) in order to fully define them. Cloud secSLAs and PLAs are typically modelled using the hierarchical structure shown in Fig. 9.3. The root of the structure defines the main container for the secSLA/PLA. The second and third levels represent the control category and control group, respectively, and they are the main link to the security/privacy framework used by the CSP. The lowest level in the SLA structure represents the actual SLOs committed to by the CSP, in which threshold values are specified in terms of security and privacy metrics.
Fig. 9.3

Example of a cloud secSLA being derived from a security control framework

Next we will develop an example related to secSLAs, but the same methodological approach is also applicable to PLAs. In Fig. 9.3, let us suppose that a CSP implements the secSLA control “Entitlement (i.e. EKM-01)” from the CSA CCM [9]. As observed in the figure, this control is contained within the group “Encryption and Key Management (i.e. EKM)”. After selecting EKM-01, the same CSP then refers to the SLO list provided in the C-SIG SLA report [16] (or any other relevant standard) and finds that two different SLOs are associated with control EKM-01, i.e. “cryptographic brute force resistance” and “hardware module protection level”. Both SLOs are then refined by the CSP into one or more security metrics, which are then specified as part of the secSLA offered to the cloud customer. For example, a CSP can commit to a “cryptographic brute force resistance” measured through security levels such as level1–level8 or through a metric called “FIPS compliance” defined as Boolean YES/NO values. Therefore, the secSLA could specify two SLOs: (cryptographic brute force resistance = level4) and (FIPS compliance = YES). If any of these committed values is not fulfilled by the CSP, then the secSLA is violated, and the customer might receive some compensation (this is the so-called SLA remediation process).
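A minimal sketch of how such committed SLOs could be checked against measured values is shown below; the dictionary representation and the measured figures are assumptions for illustration, with the ordinal level1–level8 scale encoded as integers.

```python
# Committed SLOs from the example above: an ordinal level and a Boolean metric.
committed_slos = {
    "cryptographic_brute_force_resistance": 4,   # committed "level4" on a level1-level8 scale
    "fips_compliance": True,                     # committed YES/NO value
}

# Hypothetical values measured during service operation.
measured = {
    "cryptographic_brute_force_resistance": 3,
    "fips_compliance": True,
}

def violated_slos(committed: dict, measured: dict) -> list:
    """Return the SLOs whose measured value does not meet the committed one."""
    violations = []
    for name, target in committed.items():
        value = measured.get(name)
        if isinstance(target, bool):
            ok = value is target
        else:
            ok = value is not None and value >= target   # a higher level means a stronger guarantee
        if not ok:
            violations.append(name)
    return violations

print(violated_slos(committed_slos, measured))
# ['cryptographic_brute_force_resistance'] -> the secSLA is violated; remediation applies
```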

Using the presented approach, the security and privacy SLOs proposed by the CSP can be matched to the cloud customer’s requirements before acquiring a cloud service. Actually, these SLOs provide a common semantics that both customers and CSPs can ultimately use to automate the management of cloud secSLAs and PLAs during the service provision. This will be further elaborated in the next section.

In our Wearable Co scenario, it is important to highlight that it does not suffice to understand how the Wearable service may affect its own customers, but one also needs to consider how the sub-services in the supply chain (i.e. CardioMon, Map-on-Web and DataSpacer in Fig. 9.2) contribute to the overall security and privacy levels. Hence, there is a distinct need for aggregation of metrics guaranteed by individual cloud services in order to get the values for a composite one. While practitioners have acknowledged the challenges associated with the composition of security (and privacy) metrics long before the “cloud times” [30], nowadays this topic is still mostly unexplored in cloud systems.
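Since no standard composition method exists, the following sketch shows one simple, conservative choice under assumed per-service values: for an ordinal metric the composite service is only as strong as its weakest link, and a Boolean metric holds only if every service in the chain satisfies it.

```python
# Per-service committed values for the same SLO metrics (illustrative figures only).
supply_chain = {
    "CardioMon":  {"cryptographic_brute_force_resistance": 5, "fips_compliance": True},
    "Map-on-Web": {"cryptographic_brute_force_resistance": 3, "fips_compliance": True},
    "DataSpacer": {"cryptographic_brute_force_resistance": 6, "fips_compliance": False},
}

def aggregate(chain: dict) -> dict:
    """Conservative composition: the overall service is only as strong as its weakest link."""
    metrics = next(iter(chain.values())).keys()
    composite = {}
    for metric in metrics:
        values = [service[metric] for service in chain.values()]
        composite[metric] = all(values) if isinstance(values[0], bool) else min(values)
    return composite

print(aggregate(supply_chain))
# {'cryptographic_brute_force_resistance': 3, 'fips_compliance': False}
```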

9.7 Continuous Assurance During Service Provision

The CRMF process shown in Table 9.1 highlights the fact that better levels of transparency can be achieved if the CSP continuously provides the expected assurance levels to the customer, during the whole provision of the cloud service. Based on the discussion presented in Sect. 9.6, the notion of continuous assurance can be related to reassessing the risk levels achieved during the operation of the service through analysing the compliance with agreed secSLAs and PLAs.

Let us take as a starting point the use case presented in Fig. 9.2, supposing that Wearable Co has agreed and signed SLAs (i.e. secSLAs and PLAs) with all the CSPs in the supply chain (i.e. CardioMon, Map-on-Web and DataSpacer). According to current practice, SLAs are signed by peers, which means that in our use case Wearable Co should have signed three different SLAs. However, in a real-world scenario, it may also be common for the involved CSPs to sign SLAs amongst themselves, e.g. one SLA between CardioMon and DataSpacer and another between Map-on-Web and DataSpacer. This discussion is important to establish the way in which signed SLAs will be continuously assessed during the operation of the Wearable service. In Fig. 9.2, despite the fact that the security and privacy levels depend on several components of the supply chain, the responsibility for reporting these levels for the overall service remains with the “primary” CSP at the end of the supply chain: the CSP that directly faces the customer (i.e. Wearable Co). It is the responsibility of this “primary” CSP to gain assurance about the “secondary” CSPs involved in the supply chain. If the “primary” CSP uses a continuous monitoring mechanism to exchange information with customers, and if this “primary” CSP uses the same mechanism to exchange data with “secondary” CSPs, then it may choose to “proxy” some information originating from the supply chain back to the customer. This proxy approach is useful to automate the discovery of all entities in the supply chain. Providing visibility of the supply chain is often considered a compliance requirement for CSPs with regard to EU data protection rules. Later in this section, we discuss the CSA Cloud Trust Protocol (CTP), a mechanism that can be implemented to automate this process.

However, which elements should be assessed in a secSLA/PLA? Which mechanisms are available to continuously monitor cloud assurance? Once a cloud secSLA/PLA is built and agreed with the CSP, the customer has a baseline against which to monitor the fulfilment of the agreed SLOs. The SLA can be evaluated through (i) the analysis of the fulfilment of agreed security SLOs and (ii) the identification of potential deviations from expected values (i.e. SLA violations). Intuitively, these violations can be managed by the CSP through actions ranging from changes to the current secSLA/PLA to termination of the agreed cloud service. Continuous assessment of agreed SLAs should consider both the hierarchical organisation of the SLOs (cf. Fig. 9.3) and their quantitative/qualitative nature. Both challenges have only recently started to be studied by the academic community [33], and we foresee the medium-term adoption of these approaches in real-world deployments.

From the continuous monitoring mechanism perspective, despite the apparent feasibility of the control/monitoring approach, to the best of our knowledge, there are very few efforts exploring this area. One of the recent developments in the area of continuous monitoring is CSA’s Cloud Trust Protocol (CTP) [10] which is an open API to enable cloud customers to query CSPs about the security/privacy levels of their services. A key design choice that has shaped CTP is the focus on the monitoring of secSLAs/PLAs, rather than the pure monitoring of security/privacy controls. The CTP API is designed to be a RESTful protocol that cloud customers can use to query a CSP on current security/privacy attributes related to a cloud service such as the current level of availability or information on the last vulnerability assessment. This can be done in a classical query-response approach, but the CTP API also has the ability to specify event triggers on the CSP that may optionally be reported in push mode to a specific customer. These triggers allow the cloud customers to be notified of important security/privacy events that occurred in near real time. The CTP API additionally provides access to a log facility that can be used to store and access security events generated by triggers.
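The query-response pattern can be sketched as follows; the base URL, resource paths and attribute names are purely illustrative assumptions and do not reproduce the actual CTP resource model, which should be taken from the CSA specification [10].

```python
import requests   # third-party HTTP client, assumed to be installed

# Illustrative base URL and resource path only; the real CTP resource model differs.
CTP_BASE = "https://ctp.example-csp.com/api"

def get_measurement(service_id: str, attribute: str) -> dict:
    """Query the provider for the latest measurement of a security/privacy attribute."""
    response = requests.get(
        f"{CTP_BASE}/services/{service_id}/attributes/{attribute}", timeout=10)
    response.raise_for_status()
    # Assumed response shape, e.g. {"attribute": "availability", "value": 99.95, "unit": "%"}
    return response.json()

def check_against_slo(measurement: dict, slo_threshold: float) -> bool:
    """Compare the measured value with the threshold committed in the secSLA."""
    return measurement["value"] >= slo_threshold

if __name__ == "__main__":
    m = get_measurement("wearable-service", "availability")
    print("SLO met:", check_against_slo(m, slo_threshold=99.9))
```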

It is important to emphasise that CTP mainly proposes a unified standardised API to present measurement results to improve cloud transparency and assurance. As such, the CTP API does not cover the actual monitoring infrastructure and related technologies that are used to gather, store and analyse events in order to produce these measurement results. A high-level view of CTP’s system model is shown in Fig. 9.4.
Fig. 9.4

Simplified system model of the CSA Cloud Trust Protocol

At the time of writing, CSA CTP is the main enabler of the upcoming continuous monitoring procedure to be implemented by the certification scheme called the Open Certification Framework (OCF) [11].

9.8 Example Tools

In this section we present how some of our tools developed within the Cloud Accountability Project [8] have been implemented to solve part of the identified issues, linking to phases within the accountability lifecycle. The organisational lifecycle for accountability as described in the A4Cloud Reference Architecture [25] can be summarised into three major accountability lifecycle phases:
  • Phase 1 – Provisioning for Accountability: covers risk identification (based on business impact, not just technology), control identification, control implementation design and control implementation through technology and processes. Examples include the Cloud Offerings Advisory Tool (COAT) and Data Protection Impact Assessment Tool (DPIAT) described in Sect. 9.8.1.

  • Phase 2 – Operating in an Accountable Manner: corresponds to the operational (production) phase of the solution and includes all the associated management processes. The Audit Agent System (AAS) monitors the infrastructure and collects policy-based evidence to prove that the cloud provider operates the infrastructure in an accountable manner. This tool is presented in Sect. 9.8.2.

  • Phase 3 – Audit and Validate: corresponds to the assessment of the effectiveness of the controls which have been deployed and the necessary reporting, and paves the way for the tuning (adaptation) of the measures deployed to ensure that obligations are being met. Based on the collected evidence, the Audit Agent System (AAS) generates policy violations and audit reports, as discussed in Sect. 9.8.3.

9.8.1 Phase 1: Provisioning for Accountability

The A4Cloud toolkit contains tools which address the need for support in managing risk and cloud service contract selection in the context of accountability for data stewardship in the cloud. Tools in this area serve a preventive role, by means of:
  1. Evaluation of cloud offerings and contract terms, with the goal of enabling more educated decision making about which service and service provider to select

  2. Assessment of the risks associated with the proposed usage of cloud computing involving personal and/or confidential data, and elicitation of actionable information and guidance on how to mitigate them
These two mechanisms are being developed as the following distinct tools that may be used separately or in combination.

9.8.1.1 Cloud Offerings Advisory Tool (COAT)

COAT is a cloud brokerage tool that allows potential cloud customers (with a focus on end users and SMEs) to make informed choices about data protection, privacy, compliance and governance by making cloud contracts more transparent to them. A number of related factors vary across cloud providers and are reflected in the contracts, for example, subcontracting, location of data centres, use restrictions, applicable law, data backup, encryption, remedies, storage period, monitoring/audits, breach notification, demonstration of compliance, dispute resolution, intellectual property rights on user content, data portability, law enforcement access and data deletion and retention. The focus of the tool is on providing feedback and advice related to properties that reflect compliance with regulatory obligations rather than on qualitative performance aspects (such as availability), although potentially the tool could be integrated with other tools that offer the latter.

A Web user interface enables interaction with the target users. During this interaction, potential cloud customers can provide as input a collection of answers to a questionnaire; most of this information is optional and need not be provided, although the interaction helps guide users as to their needs and produces a more targeted output. Such information includes the data location, the roles involved in the scenario to be built on the cloud, contact details of those responsible for defining the purpose of use for the involved data, contextual information about the environment setting and the user needs and requirements. Other knowledge used by the system includes the cloud service offerings in structured form, models of cloud contracts and points of attention, and reputation information with respect to the agents involved in the offering process. During this interaction, guidance is provided on the privacy and security aspects to pay attention to when comparing the terms of cloud service offerings. The outcome of COAT is an immediate and dynamically changeable report, including an overview of compatible service offerings matching the user requirements and links to further information and analysis. See Fig. 9.5 for an example.
Fig. 9.5

Example COAT screenshot
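A minimal sketch of the matching step behind such a report is given below; the offering attributes, their values and the requirement names are hypothetical and much coarser than the contract terms listed above.

```python
# Structured descriptions of cloud offerings (hypothetical entries).
offerings = [
    {"name": "CSP-A", "data_centre_location": "EU", "breach_notification": True,
     "data_portability": True},
    {"name": "CSP-B", "data_centre_location": "US", "breach_notification": True,
     "data_portability": False},
]

# Optional user requirements collected via the questionnaire; unanswered
# questions simply do not constrain the result.
requirements = {"data_centre_location": "EU", "breach_notification": True}

def compatible(offering: dict, requirements: dict) -> bool:
    """An offering is compatible if it satisfies every requirement the user expressed."""
    return all(offering.get(key) == value for key, value in requirements.items())

matches = [o["name"] for o in offerings if compatible(o, requirements)]
print(matches)   # ['CSP-A']
```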

Ongoing research involves usage of ontologies for more sophisticated reasoning and linkage to PLA terms and usage of maturity and reputational models to optimise ordering of the outputs. For further information about the system, see [1].

9.8.1.2 Data Protection Impact Assessment Tool (DPIAT)

DPIAT is a tool that assesses the proposed use of cloud services, helping users to understand, assess and select CSPs that offer acceptable standards in terms of data protection. The tool is tailored to satisfy the needs of SMEs that intend to process personal data in the cloud; it guides them through the impact assessment and educates them about personal data protection risks, taking into account specific cloud risk scenarios. The approach is based on legal and socio-economic analysis of privacy issues for cloud deployments and takes into consideration the new requirements put forward in the proposed European Union (EU) General Data Protection Regulation (GDPR) [14], which introduces a new obligation on data controllers and/or processors to carry out a Data Protection Impact Assessment prior to risky processing operations (although the requirements differ slightly across the various drafts of the regulation).

Figure 9.6 shows the high-level approach of DPIAT. The assessment is based on input about the business context, gathered via successive questionnaires (an initial screening and a full screening for a given project) and assessed using Drools rules [31]; this is combined with a risk assessment of cloud offerings [6] based upon information generated voluntarily by CSPs and collected from the CSA Security, Trust and Assurance Registry (STAR) [11]. This risk assessment approach is aligned with the methodology presented in Sect. 9.5.
Fig. 9.6

The high-level approach of the Data Protection Impact Assessment Tool

The output of the first phase of the DPIAT is advice about whether to proceed to the second phase of assessment. The second-phase questionnaire contains a set of 50 questions. The output of this phase is a report that includes the data protection risk profile, assistance in deciding whether to proceed or not and the context of usage of this tool within a wider DPIA process. Amongst other things, the tool is able to demonstrate the effectiveness and appropriateness of the implemented practices of a cloud provider, helping the provider to target resources in the most efficient manner to reduce risks. The report from this phase contains three sections. The first, project-based risk assessment, is based on the answers to the questionnaire and contains the risk level associated with sensitivity, compliance, transborder data flow, transparency, data control, security and data sharing. The second part displays risks associated with the security controls used by the CSP; it contains the 35 ENISA risk categories [21] with their associated quantitative and qualitative assessments. The last section highlights additional information that the user needs to know in relation to requirements associated with GDPR Article 33 [14]. The system also logs the offered advice and the user’s decision for accountability purposes. For further information about the system, see [2].
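A minimal Python analogue of this rule-based screening is sketched below; the actual tool uses Drools rules and a much larger questionnaire, so the questions, thresholds and category names here (a subset of those mentioned above) are assumptions for illustration only.

```python
# Questionnaire answers for a project (illustrative subset; the real DPIAT
# questionnaire has about 50 questions assessed with Drools rules).
answers = {
    "processes_health_data": True,
    "data_leaves_eea": False,
    "customer_can_export_data": True,
}

# Simple screening rules mapping answers to per-category risk levels
# (categories follow those named above: sensitivity, transborder data flow, data control).
def assess(answers: dict) -> dict:
    report = {}
    report["sensitivity"] = "high" if answers["processes_health_data"] else "low"
    report["transborder_data_flow"] = "high" if answers["data_leaves_eea"] else "low"
    report["data_control"] = "low" if answers["customer_can_export_data"] else "medium"
    return report

profile = assess(answers)
print(profile)                                          # {'sensitivity': 'high', ...}
print("proceed with full DPIA:", "high" in profile.values())
```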

9.8.2 Phase 2: Operating in an Accountable Manner

The Audit Agent System (AAS) is used to prove that the cloud infrastructure is operated in an accountable manner, by collecting evidence that captures the relevant information for demonstrating accountability. The AAS is associated with policy monitoring, which observes cloud ecosystem transactions to verify that the policy configuration is followed in the service chain and raises appropriate notifications when data usage does not comply with contracts.

The evidence collection process builds an information base, which includes the collection of operational evidence (how data is processed in the system, demonstrated by logs and other monitoring information), documented evidence (documentation for procedures, standards and policies), configuration evidence (whether systems are configured as expected), accountability controls, deployed accountability tools and the correct implementation of an accountability process. Evidence is not collected purposelessly but requires a distinct reason. This reason is defined in a policy, which is directly mapped to an accountability obligation for which the compliance status shall be checked.

There are various evidence sources to be considered, such as logs, cryptographic proofs, documentation and many more. For each, there needs to be a suitable collection mechanism: for instance, a log parser for logs, a cryptographic tool for cryptographic proofs or a file retriever for documentation. This is handled by an AAS software agent called the Evidence Collection Agent, which is specifically developed for data collection from the corresponding evidence source. Another type of collection agent has client APIs implemented to interface with more complex tools, such as Cloud Management Systems (CMS), which are one of the major evidence sources for cloud resource usage, access rights, configurations, resource provisioning, virtual machine (VM) locations, etc. Evidence Collection Agents can be deployed at different cloud architectural layers (i.e. network, host, hypervisor, IaaS, platform-as-a-service (PaaS), SaaS), with the purpose of collecting, processing and aggregating evidence to enable validation of the account. Generally, these agents receive or collect information as input and translate that information into an evidence record, before storing it in the Evidence Store. Agent technology helps to ensure extensibility by allowing easy introduction of new evidence sources through the addition of new collection agents. This approach also allows AAS to address rapid infrastructure changes, which are very common in cloud infrastructures, by easily deploying and destroying agents when needed.
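The collection pattern just described can be sketched as follows; the class and field names are illustrative assumptions, not the actual A4Cloud interfaces.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class EvidenceRecord:
    """Uniform record produced by collection agents (field names are illustrative)."""
    source: str          # e.g. "apache-access-log", "cloud-management-system"
    policy_id: str       # the obligation this evidence relates to
    payload: dict        # the normalised observation itself
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class EvidenceStore:
    """Very small in-memory stand-in for the Evidence Store."""
    def __init__(self):
        self.records: List[EvidenceRecord] = []
    def append(self, record: EvidenceRecord) -> None:
        self.records.append(record)

class LogCollectionAgent:
    """Parses raw log lines from one evidence source into evidence records."""
    def __init__(self, source: str, policy_id: str, store: EvidenceStore):
        self.source, self.policy_id, self.store = source, policy_id, store
    def collect(self, raw_line: str) -> None:
        actor, action, obj = raw_line.split()    # trivial parsing for the sketch
        payload = {"actor": actor, "action": action, "object": obj}
        self.store.append(EvidenceRecord(self.source, self.policy_id, payload))

store = EvidenceStore()
agent = LogCollectionAgent("app-log", "retention-policy-01", store)
agent.collect("cardiomon-api delete record-42")
print(store.records[0])
```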

Remediation actions that have to be performed after a policy has been violated, for instance, often rely on fine-grained monitoring facilities and on extensive capabilities for analysing the resulting evidence.

The AAS tool provides suitable means for the runtime monitoring of cloud applications and infrastructures, the verification of audit policies against the collected data and the reporting of policy violations along with the evidence supporting them (for more details about the architecture, see Sect. 9.8.4).

9.8.3 Phase 3: Audit and Validate

The cloud audit process implemented by AAS comprises two main processes: first, evidence has to be collected, as described in the previous section, providing the information required to conduct audits; second, audits are performed, which in general can be done periodically, on demand or continuously. One of the major problems with periodic audits in cloud computing is the dynamic change of the infrastructure: there is a risk of missing critical violations or incidents if the audit interval is too long.

With respect to cloud audits, we have implemented the following audit process, consisting of four phases:

  1. Planning Phase: Audit policies are derived from the input policy (e.g. an A-PPL policy) and form an automatic audit plan. Audit tasks define the evidence collection and the steps for analysis, i.e. which evidence has to be collected and how it should be analysed.

  2. Securing Phase: Installation of evidence collection for gathering the audit trail. Evidence is collected from the evidence sources according to what has been defined in phase 1.

  3. Analysis Phase: Automatic evaluation of the collected evidence against the defined policies, resulting in a statement about (non-)compliance together with the evidence supporting that claim.

  4. Presentation Phase: Presentation in an audit dashboard and/or generation of a human-readable document, which includes all processed audit tasks and their results.

Figure 9.7 depicts these different phases of auditing. An audit policy serves as the main input to the audit process, in which the collected evidence is analysed. As a result, an audit report is generated, which can take the form of a Web-based dashboard presenting policy violations or of notifications to other components about policy compliance and violations. AAS is used by auditors, who may act on behalf of the cloud customer/data subject (external view) or the cloud provider (internal view) to perform continuous and periodic audits. The goals and nature of the policies being audited may differ depending on the view. The view may also differ in the case of a trusted third-party auditor (TPA), who is independent of the customer and the provider but acts on behalf of either of them.
Fig. 9.7 Audit process
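To make this flow concrete, the following minimal sketch drives a set of audit tasks, as they might be produced by the planning phase, through the securing, analysis and presentation phases. All class, field and method names are illustrative assumptions and do not describe the AAS implementation; real collection is of course delegated to collection agents rather than done in-line.

```java
// Minimal sketch of driving audit tasks through the four phases described above.
// Names are illustrative assumptions, not the AAS implementation.
import java.util.ArrayList;
import java.util.List;

public class AuditProcessSketch {

    /** One audit task derived from the input (e.g. A-PPL) policy during planning. */
    static class AuditTask {
        String obligationId;      // the obligation this task checks
        String evidenceSource;    // where evidence is collected from (phase 2)
        String violationPattern;  // evidence matching this pattern indicates a violation (phase 3)
    }

    static class AuditReport {
        final List<String> findings = new ArrayList<>();
    }

    AuditReport runAudit(List<AuditTask> plan) {
        AuditReport report = new AuditReport();
        for (AuditTask task : plan) {
            // Phase 2: securing - gather evidence for the task's evidence source.
            List<String> evidence = collectEvidence(task.evidenceSource);
            // Phase 3: analysis - evaluate evidence against the task's rule.
            boolean compliant = evidence.stream().noneMatch(e -> e.contains(task.violationPattern));
            // Phase 4: presentation - record the result for the dashboard/report.
            report.findings.add(task.obligationId + ": " + (compliant ? "compliant" : "VIOLATION"));
        }
        return report;
    }

    private List<String> collectEvidence(String source) {
        return new ArrayList<>(); // placeholder: real collection is done by collection agents
    }
}
```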

9.8.4 Architecture of the Audit Agent System

The AAS architecture comprises the six major modules shown in Fig. 9.8.
Fig. 9.8 Audit Agent System architecture overview

The six AAS modules can roughly be divided into four major parts:

  1. Input: Audit Policy Module (APM)

  2. Runtime Management: Audit Agent Controller (AAC)

  3. Collection and Storage: Evidence Collection Agents, Evidence Store

  4. Processing and Presentation: Evidence Processor, Presenter

These are now considered in turn.

9.8.4.1 Input: Audit Policy Module (APM)

The Audit Policy Module (APM) is the main component for handling input to the AAS. Typically, obligations, access control requirements and other types of policies define how a cloud service is supposed to handle data, and gathering evidence about compliance with, or violation of, these policies is part of the AAS's task. In the APM, machine-readable input policies are parsed, and evidence collection tasks and evidence-processing tasks are extracted. The main assumption about this parsing process is that it will not be fully automatable; therefore, additional information is provided by the auditor. Depending on the actual audit task, this includes infrastructure-specific information such as the following (a configuration sketch is given after the list):
  • Specifics of the evidence source (IP addresses, Java Agent Development Environment (JADE) agent platform [49], REST endpoints)

  • Specifics of the monitored service (path to log files, files to monitor for changes)

  • Required credentials (authentication strings, usernames, passwords)

  • Audit type (periodic, continuous, one-time)
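The sketch below gives a hypothetical example of such auditor-supplied information, grouped along the four categories listed above. Every field name and value (addresses, paths, credentials) is an assumption introduced for illustration; the 192.0.2.x address and example.org host are documentation placeholders, not real endpoints.

```java
// Hypothetical example of the auditor-supplied, infrastructure-specific information
// that complements the parsed input policy. Field names and values are assumptions
// for illustration only.
public class AuditTaskConfigSketch {

    enum AuditType { PERIODIC, CONTINUOUS, ONE_TIME }

    // Specifics of the evidence source
    String agentPlatformAddress = "192.0.2.10:1099";            // JADE platform endpoint
    String cmsRestEndpoint      = "https://cms.example.org/api"; // REST endpoint of the CMS

    // Specifics of the monitored service
    String logFilePath          = "/var/log/app/audit.log";      // log file to parse
    String watchedConfigFile    = "/etc/app/service.conf";       // file to monitor for changes

    // Required credentials (in practice these would come from a secret store)
    String username             = "auditor";
    String authToken            = "…";                           // deliberately elided

    // Audit type
    AuditType auditType         = AuditType.PERIODIC;
}
```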

9.8.4.2 Runtime Management: Audit Agent Controller (AAC)

The Audit Agent Controller (AAC) is the main runtime management component. At the core of AAS, it is responsible for orchestrating audits and agents according to what has been previously defined in the APM. The typical audit lifecycle is as follows:
  1. According to the input provided by the APM, the AAC creates and configures audit policies, their tasks and the corresponding collection and processing agents.

  2. Agents are migrated between the core platform and target platforms (near the evidence source).

  3. During the agents' lifetime, the AAC monitors registered platforms and registered agents, handles exceptions and manages the creation, archival and deletion of evidence stores.

The AAC uses the JADE Agent Communication Language (ACL) [50] for internal communication between agents. As the component that communicates with all agents, the AAC sits at the core of the AAS and manages all operations regarding the orchestration of collection and processing agents, as well as maintaining the Evidence Store. Most notably, the AAC uses UDP-based monitoring of the various agents to ensure consistent and smooth operation of the AAS.
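The following sketch shows what ACL-based messaging between AAS agents could look like using the JADE API [49, 50]. The receiver name, ontology string and message content are illustrative assumptions; only the JADE classes and calls themselves are part of the framework.

```java
// Sketch of ACL-based messaging between agents using the JADE API [49, 50].
// Receiver name, ontology and content are illustrative assumptions.
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;

public class AclMessagingSketch extends Agent {

    /** Send a compliance-status notification to the controller agent. */
    void reportViolation(String details) {
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        msg.addReceiver(new AID("audit-agent-controller", AID.ISLOCALNAME));
        msg.setOntology("audit-result");
        msg.setContent(details);
        send(msg);
    }

    /** Consume the next audit-result message, if one has arrived. */
    void consumeReports() {
        MessageTemplate template = MessageTemplate.MatchOntology("audit-result");
        ACLMessage received = receive(template);
        if (received != null) {
            System.out.println("Audit result from " + received.getSender().getName()
                    + ": " + received.getContent());
        }
    }
}
```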

9.8.4.3 Collection and Storage: Evidence Collection Agents and Evidence Store

The Evidence Collection Agent reads raw evidential data from the source and generates evidence records that are sent to the Evidence Store. The Evidence Store is implemented using a transparency log (TL) [46, 47]. Since the TL functions as a key-value store for storing evidence records (encrypted messages identified by a key), NoSQL or RDBMS-based back ends can be used for persisting evidence records. All data contained in the Evidence Store is encrypted. The evidence records are encrypted on a per-audit-task basis, which means that only the Audit Policy Agent corresponding to the collection agents is able to decrypt the evidence records for further processing. Isolation between tenants in a single Evidence Store is achieved by providing one container for each tenant in which its evidence records are stored. Even stronger isolation is also possible with this approach, by providing each tenant with a separate Evidence Store hosted on a separate VM.
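As a minimal sketch of the per-audit-task encryption idea, the code below generates a key per audit task and stores each evidence record encrypted under that key in a stand-in key-value map. The choice of AES-GCM and all names here are assumptions made for illustration; the actual Evidence Store uses the transparency log scheme described in [46, 47].

```java
// Minimal sketch of per-audit-task encryption of evidence records before they are
// persisted under a key in a key-value store. AES-GCM and all names here are
// illustrative assumptions, not the TL-based Evidence Store of [46, 47].
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

public class EvidenceStoreSketch {

    private final Map<String, byte[]> store = new HashMap<>(); // stand-in key-value back end
    private final SecureRandom random = new SecureRandom();

    /** Generate a fresh key for one audit task; only that task's processor receives it. */
    static SecretKey newAuditTaskKey() throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        return keyGen.generateKey();
    }

    /** Encrypt an evidence record and store it under the given record key. */
    void put(String recordKey, String evidenceRecord, SecretKey taskKey) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, taskKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(evidenceRecord.getBytes(StandardCharsets.UTF_8));
        byte[] stored = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, stored, 0, iv.length);
        System.arraycopy(ciphertext, 0, stored, iv.length, ciphertext.length);
        store.put(recordKey, stored); // only holders of taskKey can read this record
    }
}
```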

9.8.4.4 Processing and Presentation: Evidence Processor and Presenter

The processing or analysis of evidence consists of two steps:

  1. Retrieval of the appropriate collected information from the Evidence Store (which must be done on a per-policy/audit-task basis)

  2. A verification process, which checks the correctness of recorded events against the defined obligations and authorisations

These procedures are inherently dependent on the type of audit task. There can be specific audit tasks defining a single check or a small set of checks to be performed (e.g. availability of VMs, results of a proof of retrievability (PoR), etc.) or more complex compliance checks over time periods (e.g. monthly checks of policy compliance). Depending on the complexity of the task, the number of obligations and the volume of evidence to analyse, different verification processes may need to be considered, ranging from log mining and checking for predefined tokens or patterns to automated analysers and automated reasoning over the audit trail.

For situations where the audit task consists of predefined checks, the Evidence Store is accessed and the required logs (or other elements) are identified in the related evidence records. More complex compliance checks involve the retrieval of evidence records covering given periods of time or specifically related to a policy identifier. A simple pattern-based check of this kind is sketched below.
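The following sketch illustrates such a simple verification step: evidence records retrieved for one audit task are scanned for events matching a violation pattern. The record format, the task identifier and the example rule are assumptions made for illustration; this is not the AAS Evidence Processor.

```java
// Sketch of a simple pattern-based verification step over retrieved evidence records.
// Names and the record format are illustrative assumptions.
import java.util.List;
import java.util.stream.Collectors;

public class EvidenceVerificationSketch {

    static class EvidenceRecord {
        String auditTaskId;
        String event;       // e.g. "READ personal_data from=US"
    }

    /** Returns the offending events, or an empty list if the obligation holds. */
    static List<String> checkObligation(List<EvidenceRecord> records, String taskId,
                                        String violationPattern) {
        return records.stream()
                .filter(r -> r.auditTaskId.equals(taskId))      // retrieval step (per audit task)
                .map(r -> r.event)
                .filter(e -> e.contains(violationPattern))      // verification step
                .collect(Collectors.toList());
    }
}
```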

The outputs of any audit, including reports, notification alerts and messages of non-compliance, are then processed for presentation.

There are two main ways of presenting evidence in AAS. The A-PPL-E Notification Agent is designed to generate violation notification messages, which are consumed by other A4Cloud tools, in order to report violations according to what is defined in the A-PPL policy.

The second presenter in AAS is the Web-based dashboard (see Fig. 9.9), a Web application implemented using Bootstrap. The auditor uses the dashboard as the main point of interaction with AAS: most importantly, audit results are displayed there, providing an immediate overview of the current compliance status.
Fig. 9.9 AAS dashboard

Automated Incident Detection: Collected evidence serves as the basis for policy violation detection in AAS. Audit agents monitor collected evidence records and generate violation or compliance notifications. There are three modes in which an audit agent can run:

  1. Continuous: In continuous mode, the audit agent evaluates evidence records as soon as they are generated by the collection agent. The continuous audit mode is very similar to monitoring, with immediate notification if a violation is detected. The time between evidence of a violation or incident being recorded and its actual detection and notification is minimal in this scenario. However, since evidence is analysed on the fly, more complex analyses that rely on taking a whole series of records into account are generally harder to implement.

  2. Periodic: In periodic mode, the audit agent evaluates evidence records at specific intervals (e.g. hourly, daily or weekly).

  3. One-time: In one-time mode, the audit agents, collection agents and the corresponding evidence records are archived immediately after the audit result has been generated.


9.9 Conclusions

Cloud computing drives a vast spectrum of emerging and innovative applications, products and services and is also a key technology enabler for the future Internet. Its direct economic value is unambiguously substantial, but taking full advantage of cloud computing requires considerable acceptance of off-the-shelf services, which directly affects customers' perception of transparency in this technology. Through a hypothetical use case, this chapter presented some of the main transparency and assurance barriers that (prospective) customers might encounter when migrating to the cloud, with a particular focus on SMEs. Furthermore, this chapter described three promising state-of-the-art mechanisms aimed at improving the levels of trust that customers can have in cloud systems, namely, (i) specific risk management frameworks, (ii) security and privacy specification in SLAs and (iii) the assessment of achieved security and privacy levels during the operation of the cloud service. The choice of these mechanisms is not accidental: all three are developed incrementally and have strong dependencies on each other. For example, the associated risks are continuously monitored through the assessment of agreed SLAs.

However, the analysis presented in this chapter acknowledges that, prior to any meaningful use and standardisation of the proposed mechanisms by the academic or industrial communities, effort should be invested into empirical validation of the security and privacy elements composing these SLAs; in particular, we refer to the evaluation of their feasibility in real-world scenarios. An entire research agenda should be developed by cloud stakeholders to guarantee the creation of standards and best practices reflecting Cloud-Adapted Risk Management Frameworks, secSLA/PLA elements that are feasible to deploy and the trade-offs associated with continuous monitoring mechanisms. These efforts will pave the way for the broad adoption of tools like the ones presented in this chapter.

Finally, we have illustrated a variety of accountability mechanisms that provide novel ways of improving cloud assurance and transparency, at various stages of an organisational lifecycle for accountability.

9.10 Review Questions

  1. What is accountability?

  2. How can an accountability-based approach improve cloud assurance and transparency?

  3. Explain how risk assessment, SLAs and certification relate to accountability-based approaches.

  4. What is the need for specific Cloud-Adapted Risk Management Frameworks?

  5. Mention some advantages related to the use of security and privacy SLAs with respect to security and privacy certification.

  6. Give some examples of accountability mechanisms for the cloud.



Acknowledgements

This work is supported in part by EC FP7 SPECS (grant no. 610795) and by EC FP7 A4CLOUD (grant no. 317550). We would like to acknowledge the various members of these projects who contributed to the approach and technologies described.

References

  1. Alnemr R, Pearson S, Leenes R, Mhungu R (2014) COAT: cloud offerings advisory tool. In: Proceedings of CloudCom, IEEE, pp 95–100
  2. Alnemr R et al (2015) A data protection impact assessment methodology for cloud. In: Proceedings of Annual Privacy Forum (APF), LNCS, Springer, October 2015 (to appear)
  3. American Institute of Certified Public Accountants and Canadian Institute of Chartered Accountants (AICPA-CICA) (2015) Privacy maturity model. Available via http://www.cica.ca/resources-and-member-benefits/privacy-resources-for-firms-and-organizations/item47888.aspx. Cited 1 June 2015
  4. Bennett CJ, Raab CD (2006) The governance of privacy: policy instruments in global perspective. MIT Press, Cambridge, MA
  5. Butin D, Chicote M, Le Metayer D (2013) Log design for accountability. In: Proceedings of IEEE CS Security and Privacy Workshops (SPW), pp 1–7
  6. Cayirci E, Garaga A, Santana de Oliveira A, Roudier Y (2014) A cloud adoption risk assessment model. In: Proceedings of Utility and Cloud Computing (UCC), IEEE/ACM, pp 908–913
  7. Centre for Information Policy Leadership (CIPL) (2014) A risk-based approach to privacy: improving effectiveness in practice. Available via http://www.hunton.com/files/upload/Post-Paris_Risk_Paper_June_2014.pdf. Cited 1 June 2015
  8. Cloud Accountability Project (A4Cloud). www.a4cloud.eu
  9. Cloud Security Alliance (CSA): Cloud Controls Matrix (CCM). Available via https://cloudsecurityalliance.org/research/ccm/
  10. CSA: Cloud Trust Protocol (CTP). Available via https://cloudsecurityalliance.org/research/ctp/
  11. CSA: Open Certification Framework (OCF). Available via https://cloudsecurityalliance.org/star/
  12. CSA: Privacy Level Agreement (PLA). Available via https://cloudsecurityalliance.org/research/pla/
  13. CSA: Secure Cloud (2014). Available via https://cloudsecurityalliance.org/events/securecloud2014/
  14. European Commission (EC) (2012) Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), Brussels, January 2012
  15. EC (2013) Cloud computing service level agreements: exploitation of research results
  16. EC (2014) Cloud service level agreement standardisation guidelines. C-SIG SLA
  17. European DG of Justice (Article 29 Working Party) (2010) Opinion 03/2010 on the principle of accountability (WP 173), July 2010
  18. European DG of Justice (Article 29 Working Party) (2012) Opinion 05/2012 on cloud computing
  19. European DG of Justice (Article 29 Working Party) (2014) Statement on the role of a risk-based approach in data protection legal frameworks (WP218). Available via http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp218_en.pdf
  20. European Telecommunications Standards Institute (ETSI) Cloud Standards Co-ordination Group (2013) Cloud standards coordination final report
  21. European Union Agency for Network and Information Security (ENISA) (2009) Cloud computing – benefits, risks and recommendations for information security
  22. ENISA (2014) Cloud certification schemes metaframework. Version 1.0, November 2014
  23. Felici M, Pearson S (eds) (2014) Report detailing conceptual framework. Deliverable D32.1, A4Cloud
  24. Felici M, Pearson S (2014) Accountability, risk, and trust in cloud services: towards an accountability-based approach to risk and trust governance. In: Proceedings of Services, IEEE, pp 105–112
  25. Gittler F et al (2015) Initial reference architecture. Deliverable 42.3, A4Cloud
  26. Hildebrandt M (ed) (2009) Behavioural biometric profiling and transparency enhancing tools, D 7.12, FIDIS
  27. International Data Corporation (IDC) (2012) Quantitative estimates of the demand of cloud computing in Europe
  28. International Organization for Standardization (ISO) (2014) (Draft) Information technology – cloud computing – service level agreement (SLA) framework and terminology. ISO/IEC 19086
  29. ISO (2014) Information technology – security techniques: guidelines on information security controls for the use of cloud computing services based on ISO/IEC 27002. ISO/IEC 27002
  30. Jansen W (2010) Directions in security metrics research. TR-7564. NIST
  31. JBoss: Drools business rules management system solution. Available via http://www.drools.org/
  32. Kavanagh KM, Nicolett M, Rochford O (2014) Magic quadrant for security information and event management. Gartner
  33. Luna J, Langenberg R, Suri N (2012) Benchmarking cloud security level agreements using quantitative policy trees. In: Proceedings of the Cloud Computing Security Workshop, ACM
  34. Mell P, Grance T (2011) The NIST definition of cloud computing. NIST Special Publication 800-145, September 2011
  35. National Institute of Standards and Technology (NIST) (2002) Risk management guide for information technology systems. SP 800-30. NIST
  36. NIST (2010) Guide for applying the risk management framework to federal information systems. SP 800-37. NIST
  37. NIST (2013) Cloud computing security reference architecture. NIST SP 500-299, vol 1
  38. NIST (2014a) (Draft) Cloud computing: cloud service metrics description. Public RATAX WG, NIST
  39. NIST (2014b) Cloud-adapted risk management framework. Draft NIST SP 800-173
  40. Nymity Inc (2014) Privacy management accountability framework
  41. Organisation for Economic Co-operation and Development (OECD) (2013) Guidelines concerning the protection of privacy and transborder flows of personal data
  42. Office of the Information and Privacy Commissioner of Alberta, Office of the Privacy Commissioner of Canada, Office of the Information and Privacy Commissioner for British Columbia (2012) Getting accountability right with a privacy management program, April 2012
  43. Pearson S (2011) Toward accountability in the cloud. IEEE Internet Comput 15(4):64–69
  44. Pearson S (2014) Accountability in cloud service provision ecosystems. In: Secure IT systems, LNCS, vol 8788, Springer, pp 3–24
  45. Pearson S, Wainwright N (2013) An interdisciplinary approach to accountability for future internet service provision. IJTMCC 1(1):52–72
  46. Pulls T, Martucci L (2014) User-centric transparency tools. D-5.2, vol 1, A4Cloud
  47. Ruebsamen T, Pulls T, Reich C (2015) Secure evidence collection and storage for cloud accountability audits. In: Proceedings of CLOSER 2015, Lisbon, Portugal, 20–22 May 2015
  48. Stoneburner G, Hayden C, Feringa A (2004) Engineering principles for information technology security (a baseline for achieving security). SP 800-27, NIST
  49. Telecom Italia: Java Agent Development Environment (JADE). http://jade.tilab.com
  50. Telecom Italia: JADE Agent Communication Language (ACL) (2005). Retrieved from http://jade.tilab.com/doc/api/jade/lang/acl/package-summary.html
  51. Wang C, Zhou Y (2010) A collaborative monitoring mechanism for making a multitenant platform accountable. In: Proceedings of HotCloud. Available from https://www.usenix.org/legacy/event/hotcloud10/tech/full_papers/WangC.pdf
  52. Wlodarczyk T et al (2014) A4Cloud project: DC-8.1 framework of evidence. A4Cloud

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Security and Manageability Lab, Hewlett Packard Labs, Bristol, UK
  2. Cloud Security Alliance, Scotland, UK
  3. Furtwangen University, Furtwangen, Germany
