Improving Cloud Assurance and Transparency Through Accountability Mechanisms
Accountability is a critical prerequisite for effective governance and control of corporate and private data processed by cloud-based information technology services. This chapter clarifies how accountability tools and practices can enhance cloud assurance and transparency in a variety of ways. Relevant techniques and terminologies are presented, and a scenario is considered to illustrate the related issues. In addition, some related examples are provided involving cutting-edge research and development in fields like risk management, security and privacy level agreements and continuous security monitoring. The arguments provided seek to justify the use of accountability-based approaches as an improved basis for consumers’ trust in cloud computing, and thereby to benefit the uptake of this technology.
Keywords: Accountability · Assurance · Cloud computing · Continuous monitoring · Privacy level agreement (PLA) · Service level agreement (SLA) · Transparency
The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” . This model enables a very significant change in the information technology (IT) landscape that can allow benefits to organisations including economies of scale, reduced spending on technology infrastructure and improved accessibility and flexibility. As a result, more and more organisations, in particular small- and medium-sized enterprises (SMEs), are using the cloud, either as end users or in order to become part of the service supply chain.
However, major perceived barriers to cloud usage are still the lack of transparency and assurance of the cloud service providers (CSPs). The lack of transparency can arise for different reasons and be compounded by lack of knowledge and action by data controllers and data processors (e.g. about the processing chain, inadequate levels of data protection, the illegality of transfers or the law applicable to data protection disputes). This is one aspect of the lack of accountability in the data protection context: accountability was seen in a recent report by the International Data Corporation (IDC) as the action that business users thought would most help improve cloud adoption. In order for potential cloud customers to make appropriate assessments about the suitability of using the cloud in a given context, and indeed of particular cloud service providers, it is very important that these customers are provided both with the relevant information about the cloud service providers and with an awareness of what they should be assessing; this includes not only organisational security risk aspects but also consideration of potential harm to data subjects and how that might be avoided, as well as wider regulatory and contractual compliance aspects. Furthermore, in the case of data breaches, notifications must be provided to a number of parties that may include data subjects, cloud customers and regulatory authorities; enhanced transparency in this respect may affect not only follow-up actions relating to remediation and redress but also future decision-making processes.
In this chapter we argue that more transparency and assurance can be achieved through an accountability-based approach in which new processes, mechanisms and tools are put in place. After assessing the state of the art and current inadequacies in this regard in Sect. 9.2, we consider in Sect. 9.3 how accountability relates to transparency and assurance. In Sect. 9.4 we present a motivating cloud scenario that serves as a reference for the discussions that follow. In Sects. 9.5, 9.6 and 9.7, we explain how greater transparency and assurance may be achieved via three core elements: a risk assessment approach adapted to cloud service providers (CSPs); improved levels of assurance through Service Level Agreements (SLAs) and Privacy Level Agreements (PLAs); and transparency during service provision, including the usage of continuous monitoring. Furthermore, in Sect. 9.8, we present novel mechanisms that can be provided at different phases of an accountability lifecycle. These demonstrate how risk management and transparency can be enhanced in the provisioning-for-accountability phase, how continuous monitoring can be utilised in an operational phase and how audit and certification can be linked to policies such as PLAs. Finally, we present conclusions.
9.2 State of the Art
In this section we provide a brief introduction to related work in order to frame the context and contribution of our chapter. We start with representative commercial Security Information and Event Management (SIEM) solutions:
HP ArcSight: This is oriented to large-scale security event management and offers appliance-based preconfigured monitoring of log data, management functions and reporting.
IBM Security QRadar: This provides log management, event management, reporting and behavioural analysis for networks and applications.
EMC Corp. RSA Security Analytics: This provides log and full packet data capture, security monitoring, forensic investigation and big data analytic methods.
SIEM solutions do not cover business audit and strategic (security) risk assessment but instead provide inputs that need to be properly analysed and translated into a suitable format to be used by senior risk assessors and strategic policymakers. Risk assessment standards such as ISO 2700x, NIST , etc. operate at a macro level and usually do not fully utilise information coming from logging and auditing activities carried out by information technology (IT) operations. Similarly, there exist a number of frameworks for auditing a company’s IT controls, most notably COSO and COBIT.
Closer to the cloud setting, several log management services collect and analyse logs from cloud-based applications:
Sumo Logic: This is a log management platform that allows a cloud provider to collect log data from applications, networks, firewalls and intrusion detection systems.
Amazon Web Services (AWS) CloudTrail: This is a Web service that records AWS API calls for a customer’s account and delivers log files to the customer. CloudTrail records important information about each API call, including the name of the API, the identity of the caller, the time of the call, the request parameters and the response elements returned by the AWS service. This information helps to track changes made to a customer’s AWS resources and to troubleshoot operational issues. The main purpose of CloudTrail is to make it easier to ensure compliance with internal policies and regulatory standards.
Logentries: This collects and analyses logs across software stacks using a preprocessing layer to filter, correlate and visualise log data.
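To make the evidential value of such logs concrete, the following sketch filters a CloudTrail-style log for calls that change resources. The record fields (eventTime, eventName, userIdentity, requestParameters) follow the CloudTrail record format, but the sample values and the "read-only prefix" heuristic are illustrative assumptions, not AWS-defined semantics.

```python
import json

# A minimal CloudTrail-style log file: a JSON object whose "Records" list
# holds one entry per recorded API call (values invented for illustration).
sample_log = json.dumps({
    "Records": [
        {"eventTime": "2015-03-01T10:00:00Z", "eventName": "RunInstances",
         "userIdentity": {"userName": "alice"},
         "requestParameters": {"instanceType": "t2.micro"}},
        {"eventTime": "2015-03-01T10:05:00Z", "eventName": "DescribeInstances",
         "userIdentity": {"userName": "bob"},
         "requestParameters": None},
    ]
})

def mutating_calls(log_text):
    """Return (time, user, event) for calls that change resources, crudely
    approximated here as any event not starting with a read-only prefix."""
    records = json.loads(log_text)["Records"]
    return [(r["eventTime"], r["userIdentity"].get("userName"), r["eventName"])
            for r in records
            if not r["eventName"].startswith(("Describe", "Get", "List"))]

for when, who, what in mutating_calls(sample_log):
    print(f"{when}  {who}  {what}")
```

A script of this kind turns raw API-call logs into the change history that an auditor or compliance officer actually needs.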
Different security controls can be identified that organisations need to implement in the cloud (see, e.g. ). From a management viewpoint, it is possible to identify critical processes (e.g. security risk assessment and privacy management) that address the mitigation of security and privacy threats , and the essential elements of a good organisational privacy management programme have been defined . Furthermore, an accountability framework for the cloud context has been described [44, 45], in which different types of accountability tools (usable either individually or in combination) may be provided for cloud users, providers and governance entities, ranging from preventive tools that aid informed choice, control and decision making upfront and thereby can reduce privacy and security risk, to detective tools that monitor for and report policy violations, to corrective tools that ameliorate remediation and redress in case of failure. Certification can be an important aspect of such frameworks. ENISA has developed a Cloud Certification Schemes Metaframework (CCSM) that classifies different types of security certification (which are aligned with specific standards) for cloud providers . This meta-framework is used to compare a number of different certifications identified within the Cloud Certification Schemes List (CCSL). The overall objective of this framework is to make the cloud more transparent for cloud customers, in particular, in the way cloud providers meet specific security objectives.
In general, technical security measures (such as open strong cryptography) can help prevent falsification of logs, and privacy-enhancing techniques and adequate access control should be used to protect personal information in logs. Relevant techniques that can be used include non-repudiable logs, backups, distributed logging, forward integrity via use of hash chains and automated tools for log audits. More specifically, in the work of , a collaborative monitoring mechanism is proposed for making multitenant platforms accountable. A third-party external service is used to provide a “supporting evidence collection” that contains evidence for Service Level Agreement (SLA) compliance checking (defined distinctively from runtime logs). This type of service is presented as an accountability service, in other words one that offers “a mechanism for clients to authenticate the correctness of the data and the execution of their business logic in a multitenant platform”. The external accountable service contains a Merkle B-tree structure with the hashes of the operation signatures concatenated with the new values of data after occurrence of state changes. This work includes algorithms for logging and request processes and an evaluation of a testing environment implemented in Amazon EC2. Furthermore, Butin et al.  propose a framework for accountability of practice using privacy-friendly logs. They take the approach of using formal methods to define the “accounts” and the accountability process to analyse these accounts. In the accountability process, the abstract events and logs are related, and correctness is proved such that an abstract event denotes a concrete log, thus allowing analysis of the abstract events for compliance instead of the log itself. Compliance with respect to policies was formally checked and proved.
One of the concerns identified by the authors is to verify that the logs, which are the basis of any accountability system, reflect the actual and complete activities of the data controller. In general, their work models an accountability framework that is formally proved and verified, in contrast to the approach described in this chapter: an implementation that solves the problem of collecting evidence, analysing it, generating policy-based notifications of violations and generating audit reports.
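As an illustration of the forward-integrity techniques mentioned above, the following minimal sketch chains log entries with HMACs so that truncation or in-place edits become detectable. It is a simplification: a full forward-integrity scheme based on hash chains would also evolve the key after each entry, so that compromise of the current key does not endanger earlier entries.

```python
import hashlib
import hmac

def chain_append(log, entry, key):
    """Append an entry whose tag is an HMAC over the previous tag and the
    entry text, so any truncation or in-place edit breaks verification."""
    prev = log[-1][1] if log else b"genesis"
    tag = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
    log.append((entry, tag))

def chain_verify(log, key):
    """Recompute the chain from the start and compare every stored tag."""
    prev = b"genesis"
    for entry, tag in log:
        expected = hmac.new(key, prev + entry.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev = tag
    return True

key, log = b"audit-key", []
chain_append(log, "alice read record 17", key)
chain_append(log, "bob updated record 17", key)
assert chain_verify(log, key)

log[0] = ("alice read record 99", log[0][1])  # tamper with the first entry
assert not chain_verify(log, key)
```

Because each tag depends on all preceding tags, an auditor holding only the final tag can detect retroactive modification of any earlier entry.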
9.3 The Relationship Between Accountability and Assurance
Accountability is the state of accepting allocated responsibilities (including usage and onward transfer of data to and from third parties), explaining and demonstrating compliance to stakeholders and remedying any failure to act properly. Responsibilities may be derived from law, social norms, agreements, organisational values and ethical obligations. Furthermore, accountability for complying with measures that give effect to practices articulated in given guidelines has been present in many core frameworks for privacy protection . Within cloud ecosystems, accountability is becoming an important (new) notion, defining the relations between various stakeholders and their behaviours towards data in the cloud. These relations can be analysed in terms of three objects:
Norms: the obligations and permissions that define data practices
Behaviour: the actual data processing behaviour of an organisation
Compliance: entails the comparison of an organisation’s actual behaviour with the norms
By the accountor exposing the norms it subscribes to and its behaviour, the accountee can check compliance. Norms can be expressed in policies and they derive from law, contracts and ethics. We consider further the definition and exposure of certain types of norms in a cloud context in Sect. 9.6 (focusing on SLAs).
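The norms/behaviour/compliance triple can be made concrete in a few lines: norms become machine-checkable predicates over logged events, behaviour is the event log exposed by the accountor, and compliance checking returns the violations. The specific norms and event fields below are invented for illustration.

```python
# Norms as predicates over events, behaviour as an event log, compliance
# as the absence of violations. Norms and event fields are invented.
EEA = {"DE", "FR", "IE", "NL"}  # toy stand-in for the real member list

norms = {
    "retention<=30d":
        lambda e: e["action"] != "store" or e["retention_days"] <= 30,
    "no-transfer-outside-EEA":
        lambda e: e["action"] != "transfer" or e["dest"] in EEA,
}

behaviour = [  # the accountor's exposed processing log
    {"action": "store", "retention_days": 20},
    {"action": "transfer", "dest": "US"},
]

def check_compliance(norms, behaviour):
    """Return (norm, event) pairs where behaviour deviates from the norms."""
    return [(name, event)
            for event in behaviour
            for name, holds in norms.items() if not holds(event)]

violations = check_compliance(norms, behaviour)
print(violations)  # [('no-transfer-outside-EEA', {'action': 'transfer', 'dest': 'US'})]
```

The point of the encoding is that once norms are expressed as executable policies, the accountee can check compliance mechanically rather than by inspection.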
Accountability and good systems design (in particular, to meet privacy and security requirements) are complementary, in that the latter provides mechanisms and controls that allow implementation of principles and standards, whereas accountability makes organisations responsible for providing an appropriate implementation for their business context and addresses what happens in case of failure (i.e. if the account is not provided or is not adequate; if the organisation’s obligations are not met, e.g. there is a data breach; etc.). Section 9.5 further elaborates the cloud-adapted risk assessment methodology targeting the elicitation of controls from an accountability perspective. Part of the justification that appropriate measures have been used comes from an enhanced risk assessment process. The role of a risk-based approach in data protection has been considered by a number of parties, including assessment of the relative value of such an approach , modification of the data protection principles to take it into account , analysis of its relationship with accountability  and recent regulatory analysis [7, 19].
Typically in a cloud ecosystem in a data protection context, the accountors are cloud actors which are organisations (or individuals with certain responsibilities within those) acting as a data steward for other actors’ personal data or business secrets. The accountees are other cloud actors, which may include private accountability agents, consumer organisations, the public at large and entities involved in governance. In addition, a connection between appropriateness and effectiveness can be made through the agreed SLA, which will contain committed security and privacy values relating to each metric selected, as discussed further in Sect. 9.6.
A core attribute of accountability is transparency, which is “the property of a system, organisation or individual of providing visibility of its governing norms, behaviour and compliance of behaviour to the norms” . A distinction can be made between ex ante transparency, which is concerned with “the anticipation of consequences before data is actually disclosed (e.g. in the form of a certain behaviour)”  and ex post transparency, which is concerned with informing “about consequences if data already has been revealed” . Being transparent is required not only with respect to the identified objects of the cloud ecosystem (i.e. norms, behaviour and compliance) but also with respect to remediation. In Sect. 9.7, we consider further how transparency may be provided during cloud service provision.
Although organisations can select from accountability mechanisms and tools in order to meet their context, the choice of such tools needs to be justified to external parties. A strong accountability approach would include moving beyond accountability of policies and procedures, to accountability of practice giving accountability evidence. Accountability evidence can be defined as “a collection of data, metadata, routine information, and formal operations performed on data and metadata, which provide attributable and verifiable account of the fulfilment (or not) of relevant obligations; it can be used to support an argument on the validity of claims about appropriate functioning of an observable system” .
Accountability evidence contributes towards the more general concept of assurance, which can be viewed as “Grounds for confidence that the other four security goals (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes (1) functionality that performs correctly, (2) sufficient protection against unintentional errors (by users or software), and (3) sufficient resistance to intentional penetration or bypass” . Different kinds of assurance need to be provided during the accountability lifecycle described in Sect. 9.8. In particular, as we shall consider further in Sect. 9.8, in an initial phase of provisioning for accountability, assurance (based, e.g. on assessment of capabilities) needs to be provided about the appropriateness and effectiveness of the cloud service providers under consideration; from an operational perspective, there needs to be both internal and external demonstration (to relevant stakeholders) that the organisation is operating in an accountable manner (e.g. built on monitoring evidence and via accounts); external parties will be involved in audit and validation based on information made available to them.
Before moving on to consider these aspects in more detail, we present an illustrative cloud scenario that will be used as a reference point.
9.4 Case Study
This section describes a scenario to motivate the main transparency-/assurance-related issues that confront cloud customers, in particular small- and medium-sized enterprises (SMEs) embracing the cloud.
“Wearable Co” is a manufacturer of wearable devices that needs to select a CSP to build a Web-based service on its behalf. This service should facilitate processing and storing wearable data while providing user-level functionalities, which will be consumed by the Wearable customers via a Web user interface (UI). The wearable data will be collected in two ways: automatic collection via the wearable devices and manual input from the customer via the Web application/UI. Part of the application will involve visualisation of aggregated statistics on maps. The service should be realised through one or more CSPs (i.e. a cloud service supply chain).
Wearable Co has concerns regarding the implementation of the Wearable service in the cloud. These concerns are driven by the Data Protection Authority (DPA) overseeing the fulfilment of the legal framework and by the type of personal data that will be collected. In this case, Wearable Co must ensure that all the personal data collected from Wearable customers (either automatically or manually) is protected at all phases of its processing, while a specific set of geographical data centres should be considered for storage. At the same time, this SME should apply specific data access and data handling rules that are enforced at runtime by all the stakeholders involved in the Wearable service.
Let us also consider CardioMon, which is a software-as-a-service (SaaS) CSP offering a complete solution by means of providing features for collecting, managing, storing and processing wellbeing data. CardioMon is doing business with many other customers with similar functional/business requirements as the Wearable Co and has an existing business agreement with DataSpacer, an infrastructure-as-a-service (IaaS) cloud provider, who offers advanced security and privacy mechanisms for the protected storage and retrieval of personal and sensitive information. Furthermore, the CardioMon service allows for the core functionality to be expanded via third-party services to enrich the experience of the users. Such an expansion is provided by Map-on-Web, which is a separate SaaS cloud provider, with expertise on map visualisations for big data sets. Thus, Map-on-Web complements CardioMon by expanding on the available data visualisation features.
The next three sections of the chapter will elaborate on the assurance and transparency issues related to this use case along with the proposed solutions from the risk assessment (Sect. 9.5), Service Level Agreement (Sect. 9.6) and operational provisioning of assurance (Sect. 9.7) perspectives.
9.5 Risk Assessment
In the Wearable Co scenario, a number of risks can be identified, for example:
Availability issues in any of the involved CSPs, resulting in the full/partial unavailability of the Wearable service
Insider attacks on the IaaS provider (DataSpacer), resulting in the customers’ personally identifiable information (PII) being leaked
Vendor lock-in issues preventing Wearable Co from migrating their business to a different cloud supply chain
Lack of service transparency (in particular related to data handling), resulting in customers being unaware of where their (own) data is being stored and processed
As IT functions are spread across the cloud, companies like Wearable Co will need not only event monitoring systems that cross cloud boundaries but also assurance systems that demonstrate that each CSP is enforcing its required policies and that the combination adequately manages risk. However, the missing link amongst these objectives is a common model that allows risk to be assessed under given trust assumptions, together with a way to use such representations to derive the contracts, policies and security/privacy controls that will enable accountability.
The cloud delivery model and service type adopted, in association with the security controls selected for the ecosystem, need to be chosen in such a way that the system preserves its security posture. Therefore, a properly performed risk management cycle should ensure that the residual risk remaining after securing the ecosystem is minimal and that the system achieves a security posture equivalent to that of an on-premise technology architecture or solution (an in-house Wearable solution). Moreover, the type of selected deployment model has an impact on the distribution of security responsibilities amongst the cloud actors, as related to the security conservation principle .
Despite the variety of approaches to cloud risk management derived from relevant works , the challenges associated with complex supply chains (from a risk management perspective) have only recently resulted in new initiatives to address them. The key element for the successful adoption of assurance and transparency in a cloud-based system solution is the cloud consumers’ full understanding of the cloud-specific traits and characteristics, the architectural components for each cloud service type and deployment model, and each cloud actor’s precise role in orchestrating a secure ecosystem. How confident cloud customers can feel that the risk related to using cloud services is acceptable depends on how much trust they place in those involved in orchestrating the surrounding cloud ecosystem. The risk management process ensures that issues are identified and mitigated early in the investment cycle and followed by periodic reviews. Since cloud customers and other cloud actors involved in securely orchestrating a cloud ecosystem (cf. Fig. 9.2) have differing degrees of control over cloud-based IT resources, they need to share the responsibility of implementing and continuously assessing the security requirements.
Furthermore, it is essential for the cloud consumers’ business and mission-critical processes that they be able to identify all cloud-specific risk-adjusted security and privacy controls. Cloud consumers need to leverage their contractual agreements to hold the CSPs accountable for the implementation of the security and data protection controls. They also need to be able to assess the correct implementation and continuously monitor all identified security controls. But what are the elements of a successful cloud risk management strategy in order to enable transparency and assurance? The draft NIST SP 800-173, Cloud-Adapted Risk Management Framework (CRMF) , is one of the most relevant works addressing this issue.
Table 9.1 Managing risks in the Wearable Co scenario

Step 1 – Conduct an impact analysis to categorise the information system that has been migrated to the cloud and the information that is processed, stored and transmitted by that system.
Wearable Co implementation: This is carried out by Wearable Co itself, usually during the design phase of the service. The customers’ PII must be clearly identified in accordance with applicable regulations.

Step 2 – Identify the security and privacy requirements of the system by performing a risk assessment. Select the baseline and tailored supplemental security and privacy controls.
Wearable Co implementation: Wearable Co might decide to perform the assessment considering the risk classification proposed by ENISA . Security and privacy controls can be mapped to the Cloud Security Alliance (CSA) Cloud Control Matrix (CCM ) and Privacy Level Agreements (PLA ) best practices.

Step 3 – Select the cloud ecosystem architecture that best suits the assessment results for the system.
Wearable Co implementation: Wearable Co should decide on both the cloud deployment and cloud service model to use (i.e. the supply chain shown in Fig. 9.2).

Step 4 – Assess the service provider options. Identify the security controls the cloud provider has implemented that are needed for the system. Negotiate the implementation of any additional security controls that are identified. Identify any remaining security controls that fall under the cloud consumer’s responsibility to implement.
Wearable Co implementation: In order to make a well-informed decision based on the elicited security and privacy controls (cf. Step 2), Wearable Co might decide to search and compare CSP offers based on publicly available information. Repositories like the CSA STAR  and information contained in applicable certifications and Service Level Agreements (SLAs) will prove useful during this stage. Section 9.6 will further expand on these topics.

Step 5 – Select and authorise a cloud provider to host the cloud consumer’s information system. Draft a Service Level Agreement that lists the negotiated contractual terms and conditions.
Wearable Co implementation: Wearable Co agrees the SLAs with the different CSPs participating in the supply chain, i.e. CardioMon, Map-on-Web and DataSpacer in Fig. 9.2. Assurance and transparency in relation to cloud SLAs will be further explained in the next section.

Step 6 – Monitor the cloud provider to ensure that all Service Level Agreement terms are being met. Ensure that the cloud-based system maintains the necessary security and privacy posture. Monitor the security controls that fall under the cloud consumer’s responsibility.
Wearable Co implementation: This is an essential stage to fully close the assurance and transparency lifecycle. Wearable Co (and also the other CSPs) will need to deploy the required continuous monitoring/certification mechanisms to guarantee the fulfilment of its security and privacy requirements during service operation. This will be further analysed in Sect. 9.7.
The described risk-based approach to managing information systems is a holistic activity that should be fully integrated into every aspect of the Wearable Co scenario, from planning and system development lifecycle processes (Steps 1–2) to security/privacy controls allocation (Steps 3–5). The selection and specification of security and privacy controls must support effectiveness and efficiency while satisfying constraints imposed by applicable laws, directives, policies, standards and regulations. The resulting set of security and privacy controls (baseline and tailored controls, controls inherited from the supply chain and controls under the customer’s direct implementation and management) derived from applying the CRMF (Steps 1–4) leads gradually to the creation of the applicable cloud SLA in the CRMF’s Step 5, as explained next.
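As a toy illustration of Steps 1–2 and 4, the sketch below scores each risk from the Wearable Co scenario as likelihood × impact and selects tailored controls for risks above a threshold. The scores, the threshold and the mapping from risks to CCM-style control identifiers are all assumptions made for the example, not a vetted assessment.

```python
# Illustrative risk scoring: likelihood and impact on a 1-5 scale
# (values invented), with tailored controls selected for risks at or
# above a threshold. Control IDs are CCM-style placeholders.
risks = {
    "service-unavailability": (3, 4),   # (likelihood, impact)
    "insider-pii-leak":       (2, 5),
    "vendor-lock-in":         (3, 3),
    "opaque-data-handling":   (4, 4),
}
controls_for = {
    "service-unavailability": ["BCR-01"],
    "insider-pii-leak":       ["EKM-01", "IAM-02"],
    "vendor-lock-in":         ["IPY-01"],
    "opaque-data-handling":   ["STA-05"],
}
THRESHOLD = 10  # risks scoring >= 10 get tailored supplemental controls

selected = sorted({control
                   for risk, (likelihood, impact) in risks.items()
                   if likelihood * impact >= THRESHOLD
                   for control in controls_for[risk]})
print(selected)  # ['BCR-01', 'EKM-01', 'IAM-02', 'STA-05']
```

Under these assumed scores, vendor lock-in falls below the threshold and receives no tailored control, while the other three risks drive the control selection that feeds into the SLA negotiation of Step 5.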
9.6 Service Level Agreements
At this point in the Wearable Co scenario, two key questions arise:
How can the (non-security expert) Wearable Co meaningfully assess whether the CSPs in the supply chain fulfil its security/privacy requirements?
How do all CSPs (including the Wearable Co itself) provide assurance and transparency during the full cloud service lifecycle?
This section focuses on the first question and on how the state of the art addresses it through the use of security and privacy parameters in cloud SLAs. Operational assurance/transparency will be discussed in Sect. 9.7.
With regard to the implementation of the elicited security and privacy controls (cf. Step 1 in Table 9.1), the CSPs within the supply chain can only assume the type of data the Wearable Co customer will generate and use during the operational phase of the cloud service; therefore, the CSPs are not aware of the additional security and privacy requirements and tailored controls deemed necessary to protect the Wearable Co’s customer PII. Customers require the mechanisms and tools that enable them to understand and assess what “good-enough security” means in the cloud. This requirement is critical when assessing if, for example, the Wearable Co security and privacy requirements are being fulfilled by the controls and certifications implemented by the selected CSPs.
Fortunately, stakeholders in the cloud community have identified that specifying security and privacy parameters in Service Level Agreements (termed as secSLA and PLA, respectively, in the rest of this chapter) is useful to establish a common semantics to provide and manage security assurance from two perspectives, namely, (i) the security/privacy level being offered by a CSP and (ii) the security/privacy level requested by a cloud customer.
In order to develop the full context of the value of secSLAs and PLAs for our case study, we introduce next the rationale of SLA usage along with the basic vocabulary.
9.6.1 Importance of secSLAs and PLAs for Cloud Transparency
Contracts and Service Level Agreements (SLAs) are key components defining cloud services. According to the ETSI Cloud Standards Coordination group , SLAs should facilitate cloud customers in understanding what is being claimed for the cloud service and in relating such claims to their own requirements. Naturally, better assessments and informed user decisions will also increase trust and transparency between cloud customers and CSPs.
A recent report from the European Commission  considers SLAs the dominant means for CSPs to establish their credibility and to attract or retain cloud customers, since SLAs can be used as a mechanism for service differentiation in the CSP market. The report suggests a standardised SLA specification aiming to achieve the full potential of SLAs, so that all cloud customers can understand what is being claimed for a cloud service and relate those claims to their own requirements.
At the SecureCloud 2014 event , the Cloud Security Alliance (CSA) compiled and launched an online survey to better understand the current usage and needs of European cloud customers and CSPs related to SLAs. Almost 200 respondents, equally balanced between cloud customers and CSPs (80 % from the private sector, 15 % from the public sector and 5 % from other backgrounds), provided some initial findings on the use of standardised cloud SLAs. Respondents ranked the two top reasons why cloud SLAs are important as (1) being able “to better understand the level of security and data protection offered by the CSP” (41 %) and (2) “to monitor the CSP’s performance and security and privacy levels” (35 %). Furthermore, based on the respondents’ experiences, the key issues in making cloud SLAs “more usable” for cloud customers were (1) the need for “clear SLO [Service Level Objective] metrics and measurements” (66 %), (2) “making the SLAs easy to understand for different audiences (managers, technical legal staff, etc.)” (62 %), (3) “having common/standardised vocabularies” (58 %) and (4) “clear notions of/maturity of SLAs for Security and Privacy” (52 %). These responses are empirical indicators of the need to develop the field of cloud secSLAs and PLAs and the techniques to reason about them.
9.6.2 How Are secSLAs and PLAs Structured?
This section summarises the basic cloud secSLA/PLA terminology, based (where applicable) on the latest version of the relevant ISO/IEC 19086 standard  and the CSA PLA initiative . A cloud SLA is a documented agreement between the CSP and the customer that identifies cloud services and SLOs, which are the targets for service levels that the CSP agrees to meet. If an SLO defined in the cloud SLA is not met, the cloud customer may request a remedy (e.g. financial compensation). If the SLOs cannot be (quantitatively) evaluated, then it is not possible for customers or CSPs to assess whether the agreed SLA is being fulfilled. This is particularly critical in the case of secSLAs and PLAs, where how to define useful (and quantifiable) security and privacy SLOs remains an open challenge.
Next we will develop an example related to secSLAs, but the same methodological approach is also applicable to PLAs. In Fig. 9.3, let us suppose that a CSP implements the secSLA control “Entitlement (i.e. EKM-01)” from the CSA CCM . As observed in the figure, this control is contained within the group “Encryption and Key Management (i.e. EKM)”. After selecting EKM-01, the CSP then refers to the SLO list provided in the C-SIG SLA report  (or any other relevant standard) and finds that two different SLOs are associated with control EKM-01, i.e. “cryptographic brute force resistance” and “hardware module protection level”. Both SLOs are then refined by the CSP into one or more security metrics, which are then specified as part of the secSLA offered to the cloud customer. For example, a CSP can commit to a “cryptographic brute force resistance” measured through security levels such as level1–level8 or through a metric called “FIPS compliance” defined as Boolean YES/NO values. Therefore, the secSLA could specify two SLOs: (cryptographic brute force resistance = level4) and (FIPS compliance = YES). If any of these committed values is not fulfilled by the CSP, then the secSLA is violated, and the customer might receive some compensation (the so-called SLA remediation process).
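The EKM-01 example above can be sketched as a small evaluation routine: a secSLA is a set of quantifiable SLOs, and a violation is any measured value that falls short of the committed one. The control and SLO names come from the text (CSA CCM and the C-SIG SLA report); the data structures and the evaluation logic itself are purely illustrative.

```python
# Ordered security levels for "cryptographic brute force resistance".
LEVELS = {f"level{i}": i for i in range(1, 9)}  # level1 (weakest) .. level8

def slo_met(committed, measured):
    """Return True if the measured value satisfies the committed SLO."""
    if isinstance(committed, bool):               # Boolean metric, e.g. FIPS compliance
        return measured == committed
    return LEVELS[measured] >= LEVELS[committed]  # ordinal level metric

def evaluate_secsla(secsla, measurements):
    """Return the list of violated SLOs (an empty list means the SLA is fulfilled)."""
    return [slo for slo, committed in secsla.items()
            if not slo_met(committed, measurements[slo])]

# The secSLA from the text: two SLOs under control EKM-01.
secsla = {
    "cryptographic brute force resistance": "level4",
    "FIPS compliance": True,
}

# Measured service state: resistance dropped to level3, so one SLO is violated.
violations = evaluate_secsla(secsla, {
    "cryptographic brute force resistance": "level3",
    "FIPS compliance": True,
})
print(violations)  # a non-empty list would trigger the SLA remediation process
```

A non-empty result would correspond to the remediation process described above, e.g. compensation to the customer.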
Using the presented approach, the security and privacy SLOs proposed by the CSP can be matched to the cloud customer’s requirements before acquiring a cloud service. Actually, these SLOs provide a common semantics that both customers and CSPs can ultimately use to automate the management of cloud secSLAs and PLAs during the service provision. This will be further elaborated in the next section.
In our Wearable Co scenario, it is important to highlight that it does not suffice to understand how the Wearable service may affect its own customers; one also needs to consider how the sub-services in the supply chain (i.e. CardioMon, Map-on-Web and DataSpacer in Fig. 9.2) contribute to the overall security and privacy levels. Hence, there is a distinct need to aggregate the metrics guaranteed by individual cloud services in order to obtain the values for the composite service. While practitioners acknowledged the challenges associated with the composition of security (and privacy) metrics long before the “cloud times” , this topic remains mostly unexplored in cloud systems.
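One plausible (and deliberately simple) composition rule is worst-case aggregation: an ordinal security metric for the composite service is bounded by the weakest guarantee among the sub-services. The service names follow the Wearable Co scenario; the min-based rule is an illustrative assumption, since, as noted above, metric composition for cloud systems is still largely an open problem.

```python
LEVELS = {f"level{i}": i for i in range(1, 9)}  # level1 (weakest) .. level8

def aggregate_level(guarantees):
    """Worst-case composition: the chain is only as strong as its weakest link."""
    return min(guarantees.values(), key=lambda lvl: LEVELS[lvl])

# Hypothetical per-service guarantees for one ordinal SLO in the supply chain.
chain = {
    "CardioMon": "level5",
    "Map-on-Web": "level4",
    "DataSpacer": "level6",
}
print(aggregate_level(chain))  # the composite value is bounded by Map-on-Web
```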
9.7 Continuous Assurance During Service Provision
The CRMF process shown in Table 9.1 highlights the fact that better levels of transparency can be achieved if the CSP continuously provides the expected assurance levels to the customer, during the whole provision of the cloud service. Based on the discussion presented in Sect. 9.6, the notion of continuous assurance can be related to reassessing the risk levels achieved during the operation of the service through analysing the compliance with agreed secSLAs and PLAs.
Let us take as a starting point the use case presented in Fig. 9.2, supposing that Wearable Co has agreed and signed SLAs (i.e. secSLAs and PLAs) with all the CSPs in the supply chain (i.e. CardioMon, Map-on-Web and DataSpacer). According to current practice, SLAs are signed by peers, which means that in our use case Wearable Co should have signed three different SLAs. However, in a real-world scenario, it might also be common for the involved CSPs to sign SLAs among themselves, e.g. one SLA between CardioMon and DataSpacer and another between Map-on-Web and DataSpacer. This discussion is important to establish the way in which signed SLAs will be continuously assessed during the operation of the Wearable service. In Fig. 9.2, despite the fact that the security and privacy levels depend on several components of the supply chain, the responsibility for reporting the levels of the overall service remains with the “primary” CSP at the end of the supply chain: the CSP that directly faces the customer (i.e. Wearable Co). It is the responsibility of this “primary” CSP to gain assurance about the “secondary” CSPs involved in the supply chain. If the “primary” CSP uses a continuous monitoring mechanism to exchange information with customers, and if this “primary” CSP uses the same mechanism to exchange data with “secondary” CSPs, then it may choose to “proxy” some information originating from the supply chain back to the customer. This proxy approach is useful to automate the discovery of all entities in the supply chain. Providing visibility on the supply chain is often considered a compliance requirement for CSPs with regard to EU data protection rules. Later in this section, we will discuss the CSA Cloud Trust Protocol (CTP), a mechanism that can be implemented to automate this process.
However, which elements should be assessed on a secSLA/PLA? Which mechanisms are available to continuously monitor cloud assurance? Once a cloud secSLA/PLA is built and agreed with the CSP, the customer has a baseline to monitor the fulfilment of the agreed SLOs. The SLA can be evaluated through (i) the analysis of the fulfilment of agreed security SLOs and (ii) the identification of potential deviations from expected values (i.e. SLA violations). Intuitively, these violations can be managed by the CSP through actions ranging from changes to the current secSLA/PLA to termination of the agreed cloud service. Continuous assessment of agreed SLAs should consider both the hierarchical organisation of the SLOs (cf. Fig. 9.3) and their quantitative/qualitative nature. Both challenges have only recently started to be studied by the academic community , and we foresee the midterm adoption of these approaches in real-world deployments.
From the continuous monitoring perspective, despite the apparent feasibility of the control/monitoring approach, to the best of our knowledge there are very few efforts exploring this area. One of the recent developments in the area of continuous monitoring is CSA’s Cloud Trust Protocol (CTP) , an open API that enables cloud customers to query CSPs about the security/privacy levels of their services. A key design choice that has shaped CTP is the focus on the monitoring of secSLAs/PLAs, rather than the pure monitoring of security/privacy controls. The CTP API is designed as a RESTful protocol that cloud customers can use to query a CSP on current security/privacy attributes of a cloud service, such as the current level of availability or information on the last vulnerability assessment. This can be done in a classical query-response approach, but the CTP API also has the ability to specify event triggers on the CSP that may optionally be reported in push mode to a specific customer. These triggers allow cloud customers to be notified of important security/privacy events in near real time. The CTP API additionally provides access to a log facility that can be used to store and access security events generated by triggers.
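The two interaction styles described for CTP, classical query-response (“pull”) and event triggers (“push”), can be illustrated with a schematic sketch. The class, attribute and method names below are hypothetical stand-ins, not the actual CTP API (consult the CSA CTP specification for that); only the pattern is the point.

```python
class MonitoredService:
    """Schematic CSP-side monitoring endpoint (not the real CTP interface)."""

    def __init__(self):
        self._attributes = {"availability": 99.95}
        self._triggers = []            # (attribute, condition, callback) tuples

    # Pull mode: the customer queries a current security/privacy attribute.
    def query(self, attribute):
        return self._attributes[attribute]

    # Push mode: the customer registers a trigger fired when a condition holds.
    def register_trigger(self, attribute, condition, callback):
        self._triggers.append((attribute, condition, callback))

    # CSP-side update; fires matching triggers in near real time.
    def update(self, attribute, value):
        self._attributes[attribute] = value
        for attr, condition, callback in self._triggers:
            if attr == attribute and condition(value):
                callback(attribute, value)

events = []                            # stands in for the CTP log facility
svc = MonitoredService()
svc.register_trigger("availability", lambda v: v < 99.9,
                     lambda attr, v: events.append((attr, v)))

print(svc.query("availability"))       # pull: query-response
svc.update("availability", 99.5)       # push: the trigger fires and is logged
print(events)
```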
At the current state of the art, the CSA CTP is the main enabler of the upcoming continuous monitoring procedure to be implemented by the certification scheme called the Open Certification Framework (OCF) .
9.8 Example Tools
Phase 1 – Provisioning for Accountability: works on risk identification (based on business impact, not just technology), control identification, control implementation design and control implementation through technology and processes. Examples include the Cloud Offerings Advisory Tool (COAT) and Data Protection Impact Assessment Tool (DPIAT) described in Sect. 9.8.1.
Phase 2 – Operating in an Accountable Manner: corresponds to the operational (production) phase of the solution and includes all the associated management processes. The Audit Agent System (AAS) monitors the infrastructure and collects policy-based evidence to prove that the cloud provider operates the infrastructure in an accountable manner. This tool is presented in Sect. 9.8.2.
Phase 3 – Audit and Validate: corresponds to the assessment of the effectiveness of the controls which have been deployed and the necessary reporting, and paves the way to the tuning (adaptation) of the deployed measures to ensure that obligations are being met. Based on the collected evidence, the Audit Agent System (AAS) generates policy violations and audit reports, as discussed in Sect. 9.8.3.
9.8.1 Phase 1: Provisioning for Accountability
Evaluation of cloud offerings and contract terms with the goal of enabling a more educated decision making on which service and service provider to select
Assessment of the risks associated with the proposed usage of cloud computing, involving personal and/or confidential data and elicitation of actionable information and guidance on how to mitigate them
These two mechanisms are being developed as the following distinct tools that may be used separately or in combination.
9.8.1.1 Cloud Offerings Advisory Tool (COAT)
COAT is a cloud brokerage tool that allows potential cloud customers (with a focus on end users and SMEs) to make informed choices about data protection, privacy, compliance and governance, based upon making the cloud contracts more transparent to cloud customers. A number of related factors vary across cloud providers and are reflected in the contracts, for example, subcontracting, location of data centre, use restriction, applicable law, data backup, encryption, remedies, storage period, monitoring/audits, breach notification, demonstration of compliance, dispute resolution, intellectual property rights on user content, data portability, law enforcement access and data deletion and retention. The focus of the tool is on providing feedback and advice related to properties that reflect compliance with regulatory obligations rather than providing feedback on qualitative performance aspects (such as availability), although potentially the tool could be integrated with other tools that offer the latter.
Ongoing research involves usage of ontologies for more sophisticated reasoning and linkage to PLA terms and usage of maturity and reputational models to optimise ordering of the outputs. For further information about the system, see .
9.8.1.2 Data Protection Impact Assessment Tool (DPIAT)
DPIAT is a tool that assesses the proposed use of cloud services, helping users to understand, assess and select CSPs that offer acceptable standards in terms of data protection. The tool is tailored to satisfy the needs of SMEs that intend to process personal data in the cloud; it guides them through the impact assessment and educates them about personal data protection risks, taking into account specific cloud risk scenarios. The approach is based on legal and socio-economic analysis of privacy issues for cloud deployments and takes into consideration the new requirements put forward in the proposed European Union (EU) General Data Protection Regulation (GDPR) , which introduces a new obligation on data controllers and/or processors to carry out a Data Protection Impact Assessment prior to risky processing operations (although the requirements differ slightly across the various drafts of the regulation).
The output of the first phase of the DPIAT is advice about whether to proceed to the second phase of assessment. The second-phase questionnaire contains a set of 50 questions. The output of this phase is a report that includes the data protection risk profile, assistance in deciding whether to proceed or not and the context of usage of this tool within a wider DPIA process. Amongst other things, the tool is able to demonstrate the effectiveness and appropriateness of the practices implemented by a cloud provider, helping it to target resources efficiently to reduce risks. The report from this phase contains three sections. The first, project-based risk assessment, is based on the answers to the questionnaire and contains the risk level associated with sensitivity, compliance, transborder data flow, transparency, data control, security and data sharing. The second part displays risks associated with the security controls used by the CSP. It contains the 35 ENISA risk categories  with their associated quantitative and qualitative assessments. The last section highlights additional information that the user needs to know related to requirements associated with GDPR article 33 . The system also logs the offered advice and the user’s decision for accountability purposes. For further information about the system, see .
9.8.2 Phase 2: Operating in an Accountable Manner
The Audit Agent System (AAS) is used to prove that the cloud infrastructure is operated in an accountable manner, by collecting evidence that captures the relevant information for proving accountability. The AAS is associated with policy monitoring, which inspects cloud ecosystem transactions to verify that the policy configuration is followed across the service chain and that appropriate notifications are raised when data usage does not comply with contracts.
The evidence collection process builds an information base, which includes the collection of operational evidence (how data is processed in the system, demonstrated by logs and other monitoring information), documented evidence (documentation for procedures, standards and policies), configuration evidence (whether systems are configured as expected), accountability controls, deployed accountability tools and the correct implementation of an accountability process. Evidence is not collected purposelessly but requires a distinct reason. This reason is defined in a policy, which is directly mapped to an accountability obligation whose compliance status is to be checked.
There are various evidence sources to be considered, such as logs, cryptographic proofs, documentation and many more. For each, there needs to be a suitable collection mechanism: for instance, a log parser for logs, a cryptographic tool for cryptographic proofs or a file retriever for documentation. This is done by the AAS software agent called the Evidence Collection Agent, which is specifically developed for data collection from the corresponding evidence source. Another type of collection agent has client APIs implemented to interface with more complex tools, such as Cloud Management Systems (CMS), which are one of the major evidence collection sources for cloud resource usage, access rights, configurations, resource provisioning, virtual machine (VM) locations, etc. Evidence Collection Agents can be deployed at different cloud architectural layers (i.e. network, host, hypervisor, IaaS, platform-as-a-service (PaaS), SaaS), with the purpose of collecting, processing and aggregating evidence for enabling validation of the account. Generally, these agents receive or collect information as input and translate that information into an evidence record, before storing it in the Evidence Store. Agent technology helps to ensure extensibility by allowing easy introduction of new evidence sources through new collection agents. This approach also allows AAS to address rapid infrastructure changes, which are very common in cloud infrastructures, by easily deploying and destroying agents when needed.
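The collect-and-translate behaviour of an Evidence Collection Agent can be sketched as follows: the agent reads raw data from its evidence source (here, a log parser over in-memory lines) and turns each item into a structured evidence record before handing it to the Evidence Store. The field names and the list-based store are illustrative stand-ins, not the actual A4Cloud record format.

```python
import json
import time

class LogCollectionAgent:
    """Toy collection agent for a log-type evidence source."""

    def __init__(self, source_id, evidence_store):
        self.source_id = source_id
        self.store = evidence_store          # append-only list standing in for the store

    def collect(self, raw_lines):
        """Translate raw log lines into evidence records and persist them."""
        for line in raw_lines:
            record = {
                "source": self.source_id,    # where the evidence came from
                "collected_at": time.time(), # when it was collected
                "payload": line.strip(),     # the raw evidential data
            }
            self.store.append(json.dumps(record))

store = []
agent = LogCollectionAgent("vm-42/syslog", store)
agent.collect(["user alice read file X\n", "policy P1 checked\n"])
print(len(store))  # two evidence records persisted
```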
Remediation actions that have to be performed after some policy has been violated, for instance, often rely on fine-grained monitoring facilities and extensive analysis capabilities of the resulting evidence.
The AAS tool provides suitable means for the runtime monitoring of cloud applications and infrastructures, the verification of audit policies against the collected data and the reporting of policy violations along with the evidence supporting it (for more details about the architecture, see Sect. 9.8.4).
9.8.3 Phase 3: Audit and Validate
The cloud audit process implemented by AAS comprises two main processes. First, evidence has to be collected, as described in the previous section, including the information required to conduct audits. Second, audits can be performed periodically, on demand or continuously. One of the major problems of periodic audits in cloud computing is the dynamic change of the infrastructure: there is a risk of missing critical violations or incidents if the audit interval is too long.
Planning Phase: Audit policies are derived from the input policy (e.g. an A-PPL policy), which form an automatic audit plan. Audit tasks define the evidence collection and steps for analysis, i.e. which evidence has to be collected and how it should be analysed.
Securing Phase: Installation of evidence collection agents for gathering the audit trail. Evidence is collected from the evidence sources according to what has been defined in the planning phase.
Analysis Phase: Automatic evaluation of the collected evidence according to the defined policies, which results in a statement about (non)compliance with supporting evidence for that claim.
Presentation Phase: Presentation in an audit dashboard and/or generation of a human-readable document, which includes all processed audit tasks including their results.
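The four phases above can be sketched as a small pipeline: planning derives audit tasks from a policy, securing collects the corresponding evidence, analysis evaluates it against the policy and presentation renders the result. Everything here is a schematic stand-in for the AAS components, with invented policy keys, not their real interfaces.

```python
def plan(policy):
    # Planning: derive audit tasks (what to collect and how to analyse it).
    return [{"evidence_key": key, "expected": expected}
            for key, expected in policy.items()]

def secure(tasks, evidence_source):
    # Securing: collect the audit trail defined by the plan.
    return {t["evidence_key"]: evidence_source[t["evidence_key"]] for t in tasks}

def analyse(tasks, evidence):
    # Analysis: a (non)compliance statement per task, with supporting evidence.
    return [{"task": t["evidence_key"],
             "compliant": evidence[t["evidence_key"]] == t["expected"],
             "evidence": evidence[t["evidence_key"]]} for t in tasks]

def present(results):
    # Presentation: a human-readable one-line summary per audit task.
    return ["{}: {}".format(r["task"], "OK" if r["compliant"] else "VIOLATION")
            for r in results]

# Hypothetical obligations and observed state of the infrastructure.
policy = {"data_location": "EU", "encryption_at_rest": "enabled"}
observed = {"data_location": "US", "encryption_at_rest": "enabled"}

report = present(analyse(plan(policy), secure(plan(policy), observed)))
print(report)
```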
9.8.4 Architecture of the Audit Agent System
Input: Audit Policy Module (APM)
Runtime Management: Audit Agent Controller (AAC)
Collection and Storage: Evidence Collection Agents, Evidence Store
Processing and Presentation: Evidence Processor, Presenter
These are now considered in turn.
9.8.4.1 Input: Audit Policy Module (APM)
Specifics of the evidence source (IP addresses, Java Agent Development Environment (JADE) agent platform , REST endpoints)
Specifics of the monitored service (path to log files, files to monitor for changes)
Required credentials (authentication strings, usernames, passwords)
Audit type (periodic, continuous, one-time)
9.8.4.2 Runtime Management: Audit Agent Controller (AAC)
According to the input provided by the APM, the AAC creates and configures audit policies, their tasks and the corresponding collection and processing agents.
Agents are migrated between the core platform and target platforms (near the evidence source).
During the agents’ lifetime, the AAC monitors registered platforms and registered agents, handles exceptions and manages the creation, archival and deletion of evidence stores.
The AAC uses the JADE Agent Communication Language (ACL)  for internal communication between agents. Therefore, the AAC sits at the core of the AAS and manages all operations regarding the orchestration of collection and processing agents, as well as maintaining the Evidence Store. Most notably, the AAC uses UDP-based monitoring of the various agents to ensure a consistent and smooth operation of the AAS.
9.8.4.3 Collection and Storage: Evidence Collection Agents and Evidence Store
The Evidence Collection Agent reads raw evidential data from the source and generates evidence records that are sent to the Evidence Store. The Evidence Store is implemented using a transparency log (TL) [46, 47]. Since the TL functions as a key-value store for evidence records (encrypted messages identified by a key), NoSQL or RDBMS-based back ends can be used for persisting evidence records. All data contained in the Evidence Store is encrypted. The evidence records are encrypted on a per-audit-task basis, which means only the Audit Policy Agent corresponding to the collection agents is able to decrypt the evidence records for further processing. Isolation between tenants in a single Evidence Store is achieved by providing one container for each tenant in which its evidence records are stored. However, even stronger isolation, with a separate Evidence Store hosted on a separate VM, is also possible with this approach.
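The key-value layout, per-audit-task encryption and per-tenant containers described above can be illustrated with a toy store. The XOR keystream below is a deliberate placeholder for real authenticated encryption (a real deployment would use something like AES-GCM keyed per audit task); all names are illustrative.

```python
import hashlib

def keystream(key, length):
    """Derive a deterministic keystream from a key (SHA-256 in counter mode)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    """Placeholder cipher: XOR with the keystream. NOT real encryption."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class EvidenceStore:
    """Toy key-value Evidence Store with one container per tenant."""

    def __init__(self):
        self._containers = {}        # tenant -> {record_key: ciphertext}

    def put(self, tenant, record_key, task_key, record):
        # Records are encrypted per audit task before being persisted.
        container = self._containers.setdefault(tenant, {})
        container[record_key] = xor_cipher(task_key, record.encode())

    def get(self, tenant, record_key, task_key):
        # Only a holder of the audit task key can recover the record.
        return xor_cipher(task_key, self._containers[tenant][record_key]).decode()

store = EvidenceStore()
task_key = b"audit-task-7-secret"
store.put("tenant-a", "rec-001", task_key, "vm-42: policy P1 violated")
print(store.get("tenant-a", "rec-001", task_key))
```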
9.8.4.4 Processing and Presentation: Evidence Processor and Presenter
Retrieval of the appropriate collected information from the Evidence Store (which must be policy/audit task based)
A verification process, which checks the correctness of recorded events according to defined obligations and authorisations
These procedures are inherently dependent on the type of audit task. There can be specific audit tasks defining a single check or a small set of checks to be performed (e.g. availability of VMs, results of a proof of retrievability (PoR), etc.) or more complex compliance checks over time periods (e.g. monthly checks of policy compliance). According to the complexity of the task, due to the number of obligations or the volume of evidence to analyse, different verification processes may need to be considered, ranging from log mining and checking for predefined tokens or patterns to automated analysers and automated reasoning upon the audit trail.
For the situations where the audit task consists of defined checks, the Evidence Store is accessed, and the required logs (or other elements) are identified in the related evidence records. More complex compliance checks will involve the retrieval of evidence records covering given periods of time or specifically related to a policy identifier.
The outputs of any audit, including report, notification alerts and messages of non-compliance, are then processed for presentation.
There are two main ways of evidence presentation in AAS. The A-PPL-E Notification Agent is designed to generate violation notification messages, which are consumed by other A4Cloud tools, to report violations according to what is defined in the A-PPL policy.
Continuous: In continuous mode, the audit agent evaluates evidence records as soon as they are generated by the collection agent. The continuous audit mode is very similar to monitoring with immediate notification if a violation is detected. The time between evidence about a violation or incident being recorded and actual detection and notification is minimal in this scenario. However, since evidence is analysed on the fly, more complex evidence analysis that relies on taking a whole series of records into account is generally harder to implement.
Periodic: In periodic mode, the audit agent evaluates evidence records at specific intervals (e.g. hourly, daily, weekly, etc.).
One-time: In one-time mode, the audit agents, collection agents and the corresponding evidence records are archived immediately after the audit result has been generated.
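The contrast between the continuous and periodic modes above can be made concrete with a small dispatcher: continuous mode evaluates each evidence record as it arrives and notifies immediately, while periodic mode evaluates an accumulated batch per interval (which also permits analyses over a whole series of records). The scheduling and record format are schematic.

```python
def continuous_audit(stream, check, notify):
    # Continuous mode: evaluate each record on the fly; minimal detection latency.
    for record in stream:
        if not check(record):
            notify(record)

def periodic_audit(batch, check):
    # Periodic mode: evaluate a whole interval's worth of records at once,
    # which makes series-wide analyses easier to implement.
    return [record for record in batch if not check(record)]

violations = []
records = [{"fips": True}, {"fips": False}, {"fips": True}]

# Continuous: the violation is notified as soon as its record arrives.
continuous_audit(iter(records), lambda r: r["fips"], violations.append)
print(violations)

# Periodic: the same finding surfaces, but only when the batch is evaluated.
print(periodic_audit(records, lambda r: r["fips"]))
```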
Cloud computing drives a vast spectrum of emerging and innovative applications, products and services and is also a key technology enabler for the future Internet. Its direct economic value is unambiguously substantial, but taking full advantage of cloud computing requires considerable acceptance of off-the-shelf services, which directly affects the customer’s perception of transparency in this technology. Through a hypothetical use case, this chapter presented some of the main transparency and assurance barriers that (prospective) customers might encounter when migrating to the cloud, with a particular focus on SMEs. Furthermore, this chapter described three promising state-of-the-art mechanisms aimed at improving the levels of trust that customers can have in cloud systems, namely, (i) specific risk management frameworks, (ii) security and privacy specification in SLAs and (iii) the assessment of achieved security and privacy levels during the operation of the cloud service. The choice of these mechanisms is not accidental: all three are developed incrementally and have strong dependencies amongst each other. For example, associated risks are continuously monitored through the assessment of agreed SLAs.
However, the analysis presented in this chapter acknowledges that, prior to any meaningful use and standardisation of the proposed mechanisms by the academic or industrial communities, effort should be invested in the empirical validation of the security and privacy elements composing these SLAs; in particular, the evaluation of their feasibility in real-world scenarios. An entire research agenda should be developed by cloud stakeholders to guarantee the creation of standards and best practices reflecting Cloud-Adapted Risk Management Frameworks, secSLA/PLA elements that are feasible to deploy and the trade-offs associated with continuous monitoring mechanisms. These efforts will pave the way for the broad adoption of tools like the ones presented in this chapter.
Finally, we have illustrated a variety of accountability mechanisms that provide novel ways of improving cloud assurance and transparency, at various stages of an organisational lifecycle for accountability.
9.10 Review Questions
What is accountability?
How can an accountability-based approach improve cloud assurance and transparency?
Explain how risk assessment, SLAs and certification relate to accountability-based approaches.
What is the need for specific Cloud-Adapted Risk Management Frameworks?
Mention some advantages related to the use of security and privacy SLAs with respect to security and privacy certification.
Give some examples of accountability mechanisms for the cloud.
This work is supported in part by EC FP7 SPECS (grant no. 610795) and by EC FP7 A4CLOUD (grant no. 317550). We would like to acknowledge the various members of these projects who contributed to the approach and technologies described.
- 1.Alnemr R, Pearson S, Leenes R, Mhungu R (2014) COAT: cloud offerings advisory tool. In: Proceedings of CloudCom, IEEE, pp 95–100
- 2.Alnemr R et al (2015) A data protection impact assessment methodology for cloud. In: Proceedings of the Annual Privacy Forum (APF), LNCS, Springer, October 2015 (to appear)
- 3.American Institute of Certified Public Accountants and Canadian Institute of Chartered Accountants (AICPA-CICA) (2015) Privacy maturity model. Available via http://www.cica.ca/resources-and-member-benefits/privacy-resources-for-firms-and-organizations/item47888.aspx. Cited 1 June 2015
- 4.Bennett CJ, Raab CD (2006) The governance of privacy: policy instruments in global perspective. MIT Press, Cambridge, MA
- 5.Butin D, Chicote M, Le Metayer D (2013) Log design for accountability. In: Proceedings of IEEE CS Security and Privacy Workshops (SPW), pp 1–7
- 6.Cayirci E, Garaga A, Santana de Oliveira A, Roudier Y (2014) A cloud adoption risk assessment model. In: Proceedings of Utility and Cloud Computing (UCC), IEEE/ACM, pp 908–913
- 7.Centre for Information Policy Leadership (CIPL) (2014) A risk-based approach to privacy: improving effectiveness in practice. Available via http://www.hunton.com/files/upload/Post-Paris_Risk_Paper_June_2014.pdf. Cited 1 June 2015
- 8.Cloud Accountability Project (A4Cloud). www.a4cloud.eu
- 9.Cloud Security Alliance (CSA): Cloud Controls Matrix (CCM). Available via https://cloudsecurityalliance.org/research/ccm/
- 10.CSA: Cloud Trust Protocol (CTP). Available via https://cloudsecurityalliance.org/research/ctp/
- 11.CSA: Open Certification Framework (OCF). Available via https://cloudsecurityalliance.org/star/
- 12.CSA: Privacy Level Agreement (PLA). Available via https://cloudsecurityalliance.org/research/pla/
- 13.CSA: SecureCloud (2014). Available via https://cloudsecurityalliance.org/events/securecloud2014/
- 14.European Commission (EC) (2012) Proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), Brussels, January 2012
- 15.EC (2013) Cloud computing service level agreements: exploitation of research results
- 16.EC (2014) Cloud service level agreement standardisation guidelines. C-SIG SLA
- 17.European DG of Justice (Article 29 Working Party) (2010) Opinion 03/2010 on the principle of accountability (WP 173), July 2010
- 18.European DG of Justice (Article 29 Working Party) (2012) Opinion 05/2012 on cloud computing
- 19.European DG of Justice (Article 29 Working Party) (2014) Statement on the role of a risk-based approach in data protection legal frameworks (WP 218). Available via http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp218_en.pdf
- 20.European Telecommunications Standards Institute (ETSI) Cloud Standards Co-ordination Group (2013) Cloud standards coordination final report
- 21.European Union Agency for Network and Information Security (ENISA) (2009) Cloud computing – benefits, risks and recommendations for information security
- 22.ENISA (2014) Cloud certification schemes metaframework. Version 1.0, November 2014
- 23.Felici M, Pearson S (eds) (2014) Report detailing conceptual framework. Deliverable D32.1, A4Cloud
- 24.Felici M, Pearson S (2014) Accountability, risk, and trust in cloud services: towards an accountability-based approach to risk and trust governance. In: Proceedings of Services, IEEE, pp 105–112
- 25.Gittler F et al (2015) Initial reference architecture. Deliverable D42.3, A4Cloud
- 26.Hildebrandt M (ed) (2009) Behavioural biometric profiling and transparency enhancing tools. D7.12, FIDIS
- 27.International Data Corporation (IDC) (2012) Quantitative estimates of the demand of cloud computing in Europe
- 28.International Organization for Standardization (ISO) (2014) (Draft) Information technology – cloud computing – service level agreement (SLA) framework and terminology. ISO/IEC 19086
- 29.ISO (2014) Information technology – security techniques: guidelines on information security controls for the use of cloud computing services based on ISO/IEC 27002. ISO/IEC 27017
- 30.Jansen W (2010) Directions in security metrics research. TR-7564, NIST
- 31.JBoss: Drools business rules management system solution. Available via http://www.drools.org/
- 32.Kavanagh KM, Nicolett M, Rochford O (2014) Magic quadrant for security information and event management. Gartner
- 33.Luna J, Langenberg R, Suri N (2012) Benchmarking cloud security level agreements using quantitative policy trees. In: Proceedings of the Cloud Computing Security Workshop, ACM
- 34.Mell P, Grance T (2011) The NIST definition of cloud computing. NIST Special Publication 800-145, September 2011
- 35.National Institute of Standards and Technology (NIST) (2002) Risk management guide for information technology systems. SP 800-30, NIST
- 36.NIST (2010) Guide for applying the risk management framework to federal information systems. SP 800-37, NIST
- 37.NIST (2013) Cloud computing security reference architecture. NIST SP 500-299, vol 1
- 38.NIST (2014a) (Draft) Cloud computing: cloud service metrics description. Public RATAX WG, NIST
- 39.NIST (2014b) Cloud-adapted risk management framework. Draft NIST SP 800-173
- 40.Nymity Inc (2014) Privacy management accountability framework
- 41.Organisation for Economic Co-operation and Development (OECD) (2013) Guidelines concerning the protection of privacy and transborder flows of personal data
- 42.Office of the Information and Privacy Commissioner of Alberta, Office of the Privacy Commissioner of Canada, Office of the Information and Privacy Commissioner for British Columbia (2012) Getting accountability right with a privacy management program, April 2012
- 44.Pearson S (2014) Accountability in cloud service provision ecosystems. In: Secure IT systems, LNCS, vol 8788, Springer, pp 3–24
- 46.Pulls T, Martucci L (2014) User-centric transparency tools. D-5.2, vol 1, A4Cloud
- 47.Ruebsamen T, Pulls T, Reich C (2015) Secure evidence collection and storage for cloud accountability audits. In: Proceedings of CLOSER 2015, Lisbon, Portugal, 20–22 May 2015
- 48.Stoneburner G, Hayden C, Feringa A (2004) Engineering principles for information technology security (a baseline for achieving security). SP 800-27, NIST
- 49.Telecom Italia: Java Agent Development Environment (JADE). http://jade.tilab.com
- 50.Telecom Italia: JADE Agent Communication Language (ACL) (2005). Retrieved from http://jade.tilab.com/doc/api/jade/lang/acl/package-summary.html
- 51.Wang C, Zhou Y (2010) A collaborative monitoring mechanism for making a multitenant platform accountable. In: Proceedings of HotCloud. Available via https://www.usenix.org/legacy/event/hotcloud10/tech/full_papers/WangC.pdf
- 52.Wlodarczyk T et al (2014) A4Cloud project: DC-8.1 framework of evidence. A4Cloud