From Regulatory Obligations to Enforceable Accountability Policies in the Cloud

  • Walid Benghabrit
  • Hervé Grall
  • Jean-Claude Royer
  • Mohamed Sellami
  • Monir Azraoui
  • Kaoutar Elkhiyaoui
  • Melek Önen
  • Anderson Santana De Oliveira
  • Karin Bernsmed
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 512)


The widespread adoption of the cloud model for service delivery has triggered several data protection issues. Indeed, the proper delivery of such services typically involves sharing personal and business data among the different parties in the provisioning chain. To increase cloud consumers' trust, there must be guarantees on the fair use of their data. Accountability provides the necessary assurance about data governance practices to the different stakeholders involved in a cloud service chain. In this context, we propose a framework for the representation of accountability policies. Such policies offer end-users a clear view of the privacy and accountability clauses asserted by the entities they interact with, as well as means to represent their own preferences. Our framework offers two accountability policy languages: (i) an abstract language, called AAL, devoted to the representation of preferences and clauses in a human-readable fashion, and (ii) a concrete one for the implementation of enforceable policies.


Keywords: Accountability · Data protection · Framework · Policy language · Policy enforcement

1 Introduction

According to [1], accountability concerns the data stewardship regime in which organizations that are entrusted with personal and business-confidential data are responsible and liable for processing, sharing, storing and otherwise using the data according to contractual and legal constraints, from the time the data is collected until it is destroyed (including onward transfers to third parties). The obligations associated with such responsibilities can be expressed in an accountability policy, i.e. a set of rules that defines the conditions under which an accountable entity must operate.

Today, there is neither an established standard for expressing accountability policies nor a well-defined way to enforce them. Since cloud services often combine infrastructure, platform and software applications to aggregate value and propose new cloud applications to individuals and organizations, it is fundamental for an accountability policy framework to enable "chains of accountability" across cloud services, addressing regulatory, contractual, security and privacy concerns.

In the context of the EU FP7 A4Cloud project1, we are currently working on a framework in which accountability policies will be enforceable across the cloud service provision chain by means of accountability services and tools. Accountable organizations will use these services to ensure that obligations to protect personal data and data subjects' rights2 are observed by all who store and process the data, irrespective of where that processing occurs. From the perspective of accountability, we have elicited the following types of accountability obligations that must be considered in the design of our policy framework:
  • Access and Usage Control rules - express which rights should be granted or revoked regarding the use and the distribution of data in cloud infrastructures, and support the definition of roles as specified in the Data Protection Directive, e.g. data controller and data processor.

  • Capturing privacy preferences and consent - to express user preferences about the usage of their personal data, to whom data can be released, and under which conditions.

  • Data Retention Periods - to express time constraints on how long collected personal data may be retained.

  • Controlling Data Location and Transfer - the whereabouts of data must be made clear, depending on the type of data stored and on the industry sector processing it (which may be subject to specific regulations). Accountability policies for cloud services need to be able to express rules about data localization, so that accountable services can signal where the data centers hosting them are located. Here we consider strong policy-binding mechanisms to attach policies to data.

  • Auditability - policies must describe clauses in such a way that the actions taken upon enforcing the policy can be audited, in order to ensure that the policy was adhered to. The accountability policy language must specify which events have to be audited and what information related to each audited event has to be recorded.

  • Reporting and notifications - to allow cloud providers to notify end-users and cloud customers, for instance in case of policy violations or incidents.

  • Redress - to express recommendations for redress in the policy, in order to set right what went wrong and to identify what caused the failure.

In this paper we present a cloud accountability policy representation framework designed with the aforementioned requirements in mind. We define an abstract yet readable language, called AAL, for representing accountability clauses in a human-readable fashion. We also define a concrete policy enforcement language, called A-PPL, as an extension of the PPL language [3]. The proposed framework offers the means to translate abstract clauses expressed in AAL into concrete policies in A-PPL.

The rest of this paper is organized as follows. Section 2 describes related work. Section 3 gives an overview of the main components of our policy representation framework. We present the abstract accountability policy language we propose in Sect. 4 and the concrete one in Sect. 5. Section 6 describes a realistic use case as a proof of concept for our work. Section 7 discusses our work and presents directions for future work.

2 Related Work

In the following, we provide an overview of related work in the field. We organize this section along the following categories, which relate to our contribution in this paper: accountability in computer science, obligations in legal texts and directives, and enforcement and policy languages.

2.1 Accountability

There is recent interest and active research in accountability, overlapping several domains such as security [4, 5, 6], language representation [7, 8], auditing systems [9, 10] and evidence collection [11, 12]. However, only a few of these works take an interdisciplinary view of accountability that accounts for legal and business aspects. We particularly emphasize the work in [6, 9], since it provides a general and concrete, yet operational, approach.

Regarding tool support and frameworks, several proposals exist [12, 13, 14], but none of them provides a holistic approach to accountability in the cloud, from end-user-understandable sentences down to concrete machine-readable representations. In [11], the authors propose an end-to-end decentralized accountability framework to keep track of the usage of data in the cloud. They suggest an object-centered approach that packs the logging mechanism together with users' data and policies.

2.2 Obligations in Regulations

There is an international trend towards data protection, for instance in Europe with Directive 95/46/EC [2], the HIPAA rules [15] in the USA and the FIPPA act [16] in Canada. As an example, Directive 95/46/EC states rules to protect personal data in case of processing or of transfer to other countries. There have been some attempts to formalize, or to give rigorous analyses of, this kind of rules. In [7] the authors present a restricted natural language, SIMPL (SIMple Privacy Language), to express privacy requirements and commitments. In [17] the authors describe a general process for developing semantic models from privacy policy goals mined from policy documents. In [18] the authors develop an approach where contracts are represented by XML documents enriched with logic metadata, with assistance from a theorem prover. In [8] the authors provide a formal language to express privacy laws, with a real validation on the HIPAA and GLBA [19] rule sets. These works either are not end-to-end proposals, cover only data privacy rather than accountability, or are purely formal proposals without an enforcement layer.

2.3 Enforcement and Policies

A number of policy languages have been proposed in recent years for machine-readable policy representation; we reviewed several of them (see [20] for details) and present here the results of our analysis. Rather than designing a new policy language for accountability, we leverage an existing concrete language that covers most of the accountability obligations listed in Sect. 1, and aim at extending it with features that enable the expression of accountability policies.

The first step of our analysis is to check to what extent the selected policy languages can express the accountability obligations. None of the languages we reviewed enables the expression of all of them; however, each may cover one or several. For example, PPL [3] can be used to express access and usage control rules as well as privacy preferences and policies, and it enables notifications. From this first analysis, we classify our selected policy languages into four categories: (i) Access control: the eXtensible Access Control Markup Language (XACML, [21]); (ii) Privacy: the Platform for Privacy Preferences (P3P, [22]), the PrimeLife Policy Language (PPL, [3]) and SecPal for Privacy (SecPal4P, [23]); (iii) Policy specification for security: ConSpec [24] and Ponder [25]; (iv) Service description: the Unified Service Description Language (USDL, [26]), SLAng [27] and WS-Policy [28, 29]. Note that one language can belong to several categories. We also argue that no additional Accountability language category can be defined from our selected set of languages. In particular, most of the languages do not provide means to express the logging, reporting and audit obligations that are essential to support accountability. Therefore, the design of the accountability language we propose in the following sections represents an unprecedented attempt to express accountability obligations via a policy language.

In a second step, we analyze the extensibility of the reviewed languages, with a view to extending one of them with accountability features. We focus on XML-based languages, since XML [30] provides many extension points; in addition, XML is a well-documented standard, so adding extensions to an XML-based language is fairly simple. Languages such as XACML, P3P and PPL use XML to define policies related to access control and privacy. In our work we consider PPL, since it provides the elements that best capture accountability obligations.

As a result of this survey, we focus our effort on the extension of PPL. Extending PPL for accountability has also been investigated in other papers, concurrently and independently from our work. Contemporaneous work by Butin et al. [31] leverages PPL to design logs for accountability. They identify the lack of expressiveness of PPL as far as the creation of logs for accountability is concerned. They also discuss the fact that PPL does not allow specifying the content of the notification in a data handling policy. Similarly, Henze et al. [32] identify location of storage and duration of storage as the two main challenges in cloud data handling scenarios. They propose to use PPL to specify data annotations that contain the data handling obligations (e.g. "delete after 30 days"). Without giving more details, they propose to extend PPL with an attribute that specifies a maximum and a minimum duration of storage, and with an element that restricts the location of stored data. Our proposed language also addresses these two challenges, and we give in Sect. 5 the details of the extensions that solve these issues.
Fig. 1. Overview of the accountability policy representation framework.

3 The Cloud Accountability Framework

In this section, we provide an overview of our proposed policy representation framework. Such a framework must allow end-users to easily express their accountability clauses and preferences, yet be complete and rigorous enough to be run by a policy execution engine. Hence we face the following dilemma: the policy must be written by an end-user, who does not necessarily have skills in any particular policy language, while at the same time being machine-understandable, i.e. its sentences can be read, understood and executed by a computer.

In this context, we propose a policy representation framework (see Fig. 1) that allows a user, step (1) in Fig. 1, to express his accountability needs in a human readable fashion and (2) offers the necessary means to translate them (semi-)automatically3 to a machine understandable format.

Accountability, as it appears in legal, contractual and normative texts about data privacy, makes explicit four important roles that we consider in our proposal:
  • Data subject: this role represents any end-user who has data privacy concerns, mainly because he has outsourced some of his data to a cloud provider.

  • Data processor: this role is attributed to any computational agent that processes personal data. It should act under the control of a data controller.

  • Data controller: it is legally responsible to the data subject for any violation of his privacy, and to the data protection authority in case of misconduct.

  • Auditor: it represents data protection authorities, which are in charge of enforcing laws and directives.

3.1 Step (1). Human/Machine Readable Representation

To express accountability clauses we define an Abstract Accountability Language (AAL), devoted to expressing accountability clauses in an unambiguous style close to what the end-user needs and understands. As this is the human-readable level, the language should be simple, akin to a natural logic, that is, a logic expressed in a subset of a natural language.

For instance, a simple access control clause stating that "the data d cannot be read by any agent" is formulated in a human/machine-readable fashion using our accountability language as "DENY ANY:Agent.READ(d:Data)". Details of the AAL syntax are provided in Sect. 4.

3.2 Step (2). Machine Understandable Representation

In this step (called the mapping), the accountability clauses expressed in AAL are (semi-)automatically translated into a machine-understandable policy. We target a policy language able to enforce classic security means (such as access or usage control) as well as accountability clauses. This automatic translation may require several passes, due to the high level of abstraction of AAL.

As analyzed in Sect. 2, the PrimeLife Policy Language (PPL) [3] seems the most convenient language for representing privacy policies. It can be extended to address specific accountability obligations such as auditability, notification or logging. Hence, we propose A-PPL (accountable PPL), an extension of PPL that supports such obligations. The details of this extension are described in Sect. 5.

4 Abstract Language

We introduce in this section AAL (Abstract Accountability Language), which is devoted to expressing accountability clauses in an unambiguous, human-readable style. The AAL concepts are presented in Sect. 4.1, its syntax in Sect. 4.2 and its semantics and verification in Sect. 4.3; we provide an outlook on our approach for a machine-understandable representation of AAL policies in Sect. 4.4.

4.1 AAL Concepts

As explained in [9], an accountable system can be defined in five steps: prevention, detection, evidence collection, judgment and punishment. We follow this line for the foundation of our accountability language. In AAL, usage control expressions represent the preventive part. Audit expressions encompass the detection, evidence collection and judgment parts. Finally, rectification expressions represent the punishment part; we use the term rectification since these expressions cover not only punishment, but also remediation, compensation, sanction and penalty. Thereby, an AAL sentence is a property (more formally, a distributed system invariant) expressing usage control, auditing and rectification. The general form of an AAL sentence is UsageControl Auditing Rectification, with the informal meaning: try to ensure the usage control; in case of an audit, if a violation is observed then the rectification applies. Note also that AAL sentences come in two flavors:
  • User preferences: expressing the clauses a data subject wants to be satisfied, for instance that his data must not be distributed over the network, or must only be used for statistics by a given data processor, and so on.

  • Processor clauses: these are the clauses the data processor declares to ensure regarding the data management and processing.

Finally, as in many policy representation languages, we consider permission, obligation and prohibition in AAL. These notions occur in various approaches, such as PPL or the ENDORSE project4. Permission, obligation and prohibition are expressed in AAL sentences with the keywords PERMIT, MUST and MUSTNOT/DENY respectively, as advocated by IETF RFC 2119 [33].

4.2 AAL Syntax

Figure 2 shows the syntax of AAL using a Backus-Naur Form (BNF) [34] like notation. AAL allows the expression of Clauses representing actions that have to be met either in an accountability policy or in a preference. A Clause has one usage expression and, optionally, an audit and a rectification expression (see the grammar in Fig. 2). The ActionExp of a clause can be either atomic or composite.
Fig. 2. Excerpt of the AAL syntax.

As an example, consider the user preference of a data subject who grants read access on his data D to an agent A. This usage control is a permission. The full accountability sentence further implies that an auditor B will audit the system and, in case of violations, can sanction the data controller C.
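This preference can be rendered in AAL roughly as follows. The sketch below follows the grammar excerpt of Fig. 2 and the DENY example of Sect. 3.1; the AUDITING and IF_VIOLATED_THEN keywords are assumptions of ours, since the original expression is only given as a figure:

```
PERMIT A:Agent.READ(D:Data)
AUDITING B:Auditor.AUDIT()
IF_VIOLATED_THEN B.SANCTION(C:DataController)
```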

Further examples of user preferences and clauses expressed in AAL are provided in Sect. 6.

4.3 Semantics and Verification

We define a formal semantics for AAL based on a pure temporal-logic approach. However, since we need data and agent quantification, we rely more precisely on a first-order linear temporal logic [35]. The translation is rather straightforward, except that the audit process introduces a new modality with a specific formal interpretation (see [36] for details). From this interpretation it is possible to check the consistency of an AAL formula, or the compliance of two AAL clauses, using a logical prover. This last feature is particularly suitable for matching user preferences against processor clauses. Furthermore, principles for abstract component design have been defined and can be checked with a model checker. In [37] we present these design principles, how to translate them into the \(\mu \)-calculus and how to verify AAL clauses with the mCRL2 toolset [38].

4.4 Machine Understandability

Generating machine-understandable policies from accountability preferences and clauses written in AAL can easily be done for usage control clauses. However, the mapping is less obvious for clauses with temporal modalities and with auditing. The main issue is the gap between the AAL language, which is property-oriented, and the machine-understandable language, which is operational. To fill this gap we need further artifacts; Fig. 3 provides an overview of our proposed mapping process.
Fig. 3. Overview of the machine-understandable translation of AAL.

According to this figure, going from a human/machine-readable representation in AAL to a machine-understandable representation of the accountability preferences/clauses (arrow (2) in Fig. 3) is done in three steps:
  • (2’.1). First, a temporal logic is used to make AAL sentences more concrete, as temporal logic properties. Indeed, an accountability policy should represent the notions of permission, obligation and prohibition; in addition, there is a need to express conditions and various logical combinations. Furthermore, it is important to have a notion of time, at least logical discrete time, for instance to write: "X writes some data and then stores some logs". Our target is a temporal logic with time; one concrete candidate is mCRL2 [38].

  • (2’.2). Second, a policy calculus is used to describe the operational semantics associated with the concrete properties defined in (2’.1). This calculus is based on the concept of a reference monitor [39] for both the agents and the data resources, and relies on previous work on distributed agents communicating via messages [40]. This operational semantics provides the means for abstractly executing the temporal logic expressions. The process is known as "program synthesis": starting from a property, it generates a program ensuring/enforcing that property.

  • (2’.3). Finally, the policy generated by our policy calculus is (semi-)automatically translated into a machine-understandable policy based on predefined transformation rules. Our target is A-PPL, the extension of PPL described in the next section.

5 Concrete Language

Our accountability policy representation framework maps AAL clauses to concrete, operational, machine-understandable policies. As mentioned in Sect. 2.3, in order not to define yet another completely new language for mapping accountability obligations to machine-understandable policies, we conducted a preliminary study of existing languages; among all possible candidates, PPL is the one that best captures the accountability concepts. Therefore, in this section, we present how A-PPL extends PPL to address accountability obligations.

PPL implicitly identifies four roles: the data subject, data controller, third-party data controller and data processor. Besides, PPL defines an obligation as a set of triggers and actions. Triggers are events related to the obligation that are filtered by a condition and that trigger the execution of actions. PPL therefore defines markups to declare an obligation; inside the obligation environment, one can specify a set of triggers and their related actions.
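Schematically, a PPL obligation thus pairs a set of triggers with actions. The fragment below is illustrative only (namespaces are omitted and the nesting is simplified with respect to the actual PPL schema); the trigger and action names are taken from PPL:

```xml
<Obligation>
  <TriggersSet>
    <!-- event that fires the obligation, optionally filtered by a condition -->
    <TriggerAtTime/>
  </TriggersSet>
  <!-- action executed when the trigger fires -->
  <ActionDeletePersonalData/>
</Obligation>
```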

5.1 Extension of Roles

To address accountability concerns in a cloud environment, it may be necessary to include in the policy a reference to the role of the subject to which the policy applies. For instance, in PPL it was not possible to identify the data controller. We therefore add to the PPL <Subject> element a new attribute, attribute:role, for this purpose. Furthermore, in addition to the four roles PPL inherently considers (data subject, data controller, downstream data controller, data processor), A-PPL introduces one additional role: the auditor, a trusted third party that can conduct independent assessments of cloud services, of the performance of information systems operations and of the security of the cloud implementation. This new role is important to capture accountability-specific obligations such as auditability, reporting, notification and possibly redress.
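As an illustration, the new attribute could be used as follows to mark a rule's subject as the auditor. The element nesting is illustrative and the attribute is written here simply as role; only the attribute itself is what A-PPL proposes:

```xml
<Subject role="auditor">
  <!-- subject-matching attributes as in XACML/PPL -->
</Subject>
```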

5.2 Extension of Actions and Triggers

We add to PPL a set of new A-PPL actions and triggers in order to map accountability obligations. We introduce two new triggers related to access control: TriggerPersonalDataAccessPermitted and TriggerPersonalDataAccessDenied. They fire actions based on the result of an access decision taken on a piece of data: if the evaluation of the access control on the targeted data is Permit, TriggerPersonalDataAccessPermitted may trigger an action specified in the policy; symmetrically, TriggerPersonalDataAccessDenied triggers actions when an access to a piece of data is denied.

We also enhance the logging action ActionLog and the notification action ActionNotify that already exist in PPL. For instance, while PPL currently enables notification through ActionNotifyDataSubject, A-PPL defines a new, more general ActionNotify action in which one can designate the recipient of the notification thanks to a newly defined parameter recipient. Moreover, the additional notification type parameter defines the purpose of the notification, which can be, for example, policy violation, evidence or redress notification. The current ActionLog action in PPL, on the other hand, fails to capture accountability obligations. The new ActionLog action in A-PPL introduces many additional parameters that provide more explicit information on the logged event: for example, timestamp defines the time of the event, and resource location identifies the resource the action was taken on. We also create two actions related to auditability: ActionAudit, which creates an evidence request, and ActionEvidenceCollection, which collects the requested evidence. In addition, auditability requires two new triggers related to evidence: TriggerOnEvidenceRequestReceived, which occurs when an auditee receives an evidence request, and TriggerOnEvidenceReceived, which occurs when an auditor receives the requested evidence.

Similarly, when an update occurs in a policy or in a user preference, the update may trigger a set of actions to be performed. We thus create two additional triggers: TriggerOnPolicyUpdate and TriggerOnPreferenceUpdate. Finally, to handle complaints that a data subject may file in the context of remediability, we define the trigger TriggerOnComplaint, which triggers a set of specific actions to be undertaken by an auditor and/or a data controller.
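As an illustration, the sketch below combines one of the new triggers with the generalized notification action. The trigger and action names are those introduced above; the attribute encoding of the recipient and type parameters is an assumption of ours, since their exact XML form is not given here:

```xml
<Obligation>
  <TriggersSet>
    <!-- fires when access to the targeted personal data is denied -->
    <TriggerPersonalDataAccessDenied/>
  </TriggersSet>
  <!-- notify the data subject that a policy violation occurred -->
  <ActionNotify recipient="data-subject" type="policy-violation"/>
</Obligation>
```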

6 Validation

In this section we validate our policy representation framework by extracting obligations from one of the use cases documented in the A4Cloud public deliverable DB3.1 [41] and illustrate their representation in AAL and A-PPL.

6.1 The Health Care Use Case

This use case concerns the flow of health care information generated by medical sensors in the cloud. The system, which is illustrated in Fig. 4, is used to support diagnosis of patients by the collection and processing of data from wearable sensors. Here, we investigate the case where medical data from the sensors will be exchanged between patients, their families and friends, the hospital, as well as between the different Cloud providers involved in the final service delivery.
Fig. 4. An overview of the main actors involved in the health care use case.

In this use case the patients are the data subjects from whom personal data is collected. The hospital is ultimately responsible for the health care services and will hence act as one of the data controllers for the personal data that will be collected. The patients’ relatives may also upload personal data about the patients and can therefore be seen as data controllers (as well as data subjects, when personal data about their usage of the system is collected from them). As can be seen in Fig. 4, the use case will involve cloud services for sensor data collection and processing (denoted cloud provider “X”), cloud services for data storage (denoted cloud provider “Y”) and cloud services for information sharing (denoted cloud provider “M”), which will be operated by a collaboration of different providers. Since the primary service provider M, with whom the users will interface, employs two sub-providers, a chain of service delivery will be created. In this particular case, the M platform provider will be the primary service provider and will act as a data processor with respect to the personal data collected from the patients. Also the sub-providers, X and Y, will act as data processors. The details of the use case are further described in [41].

6.2 Obligations for the Use Case

We have identified a number of obligations for this use case, which need to be handled by the accountability policy framework. Here we list three examples and explain how they are expressed in AAL and mapped into A-PPL. Note that the complete list of obligations is much longer, but we have chosen to outline those that illustrate the most important relationships between the involved actors. Due to space limitations we do not include the complete A-PPL policies here; the reader is referred to the project documentation [20] for the full policy expressions.

Obligation 1: The Data Subject’s Right to Access, Correct and Delete Personal Data. According to the Data Protection Directive [2], data subjects have (among others) the right to access, correct and delete personal data that has been collected about them. In this use case this means that the hospital must grant read and write access to patients, as well as to relatives, with regard to the personal data that has been collected and stored in the cloud. There must also be means to enforce the deletion of such data.

The AAL expression capturing the clauses associated with the patient is:
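The expression itself appears as a figure in the original; a sketch consistent with the AAL examples above (the keywords beyond PERMIT/MUST, the WHERE condition syntax and the WHEN guard are assumptions of ours) might read:

```
PERMIT Patient.READ(D:Data) AND Patient.WRITE(D)
  WHERE D.subject = Patient
MUST Hospital:DataController.DELETE(D)
  WHEN Patient.REQUESTS_DELETION(D)
AUDITING Auditor.AUDIT()
IF_VIOLATED_THEN Auditor.SANCTION(Hospital)
```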

The condition D.subject=Patient expresses that this clause only concerns the personal data of the Patient. This clause also expresses the audit and rectification obligations that have to be ensured by an external Auditor.

Using the accountability policy representation framework, the AAL expression is mapped into two different A-PPL expressions: one permitting read and write access to the patients, and another one requiring the data controller to delete the personal data whenever requested. Read and write access control is achieved through XACML rules. Regarding deletion of data, a patient can express data handling preferences that specify the obligation the data controller has to enforce to delete the personal data. This obligation can be expressed using the A-PPL obligation action ActionDeletePersonalData, which the patient uses to delete personal data that has been collected about him.
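The deletion part could be sketched in A-PPL as follows. Only ActionDeletePersonalData is taken from the language itself; the trigger name is hypothetical, standing for whatever event signals a data subject's deletion request:

```xml
<Obligation>
  <TriggersSet>
    <!-- hypothetical trigger: the data subject requests deletion -->
    <TriggerOnDeletionRequest/>
  </TriggersSet>
  <ActionDeletePersonalData/>
</Obligation>
```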

An explicit audit clause implies that information related to the usage control property is logged (the amount and nature of this information is not discussed here). The audit clause is thus translated into an AuditAction, which is responsible for managing the interaction with the auditor. This runs an exchange protocol with the auditor that ends with one of two responses: either no violation of the usage has been detected, or a violation exists. In the latter case, the specified rectification clauses apply.

In the sequel we only consider usage control clauses, since the translation process for audit and rectification is similar to the previous example.

Obligation 2: The Data Controller Must Notify the Data Subjects of Security or Personal Data Breaches. This obligation defines what happens in case of a security or privacy incident. In AAL it is expressed by the hospital as:
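The expression is given as a figure in the original; a plausible sketch (the breach-detection predicate and the IF/THEN syntax are assumptions of ours) is:

```
IF SecurityBreach(D:Data)
THEN MUST Hospital:DataController.NOTIFY(D.subject)
```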

In A-PPL, such a notification is expressed through the obligation action ActionNotify. It takes as parameters the recipient of the notification (here, the data subject) and the type of the notification (here, security breach).
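Concretely, this could take the following illustrative form; the attribute encoding of the two parameters is an assumption, since only the parameter names are specified above:

```xml
<ActionNotify recipient="data-subject" type="security-breach"/>
```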

Obligation 3: The Data Processor Must, upon Request, Provide Evidence to the Data Controller on the Correct and Timely Deletion of Personal Data. To express the timely deletion of personal data, which in addition is logged to be used as evidence, the following AAL expression can be used by the provider M:
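The expression is given as a figure in the original; a sketch in the spirit of the AAL examples above (the BEFORE keyword, the retention attribute and the LOG/NOTIFY call forms are assumptions of ours) might read:

```
MUST M:DataProcessor.DELETE(D:Data) BEFORE D.retentionDeadline
AND MUST M.LOG("delete", D)
AND MUST M.NOTIFY(DataController)
```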

In A-PPL, the obligation trigger TriggerPersonalDataDeleted combined with the obligation action ActionNotify will notify the data subject of the deletion of its data. In addition, if necessary, the obligation action ActionLog will allow the provider M to log when personal data have been deleted.

The three examples that we have provided in this section represent a snapshot of the full power of AAL and A-PPL. In [41] we outline more examples of obligations for the health care use case, which among other things demonstrate how informed consent can be gathered from the patients before their data is being processed, how the purpose of personal data collection can be specified and controlled, how the data processor M can inform the hospital of the use of sub-processors and how the data processors can facilitate for regulators to review evidence on their data processing practices.

7 Conclusions

Dealing with personal data in the cloud raises several accountability and privacy issues that must be addressed to promote the safe usage of cloud services. In this paper we tackle the representation of accountability clauses and preferences. We propose a cloud accountability policy representation framework. This framework enables accountability policies to be expressed in a human-readable fashion using our abstract accountability language (AAL), and offers the means to map them to concrete enforcement policies written in our accountability policy language (A-PPL). Our framework applies the separation-of-concerns principle by separating the abstract language from the concrete one. This choice makes both contributions, i.e. AAL and A-PPL, self-contained and allows their independent use. The ability of our framework to represent accountability clauses/preferences was validated through a realistic use case.

Our future research work will focus on the mapping from AAL to A-PPL. As part of our implementation perspectives, we are currently working on two prototypes: an AAL editor that assists end-users in writing their preferences/clauses and implements the artifacts required to map them to concrete A-PPL policies, and an A-PPL policy execution engine that will be in charge of interpreting and matching A-PPL policies and preferences.


  1. The Cloud Accountability Project:

  2. This work mainly focuses on the European Data Protection Directive [2].

  3. Here "semi" means that sometimes human assistance could be needed.

This work was funded by the EU's 7th Framework Programme A4Cloud project.


  1. Pearson, S., Tountopoulos, V., Catteddu, D., Südholt, M., Molva, R., Reich, C., Fischer-Hübner, S., Millard, C., Lotz, V., Jaatun, M.G., Leenes, R., Rong, C., Lopez, J.: Accountability for cloud and other future internet services. In: CloudCom, pp. 629–632. IEEE (2012)
  2. Directive, E.U.: Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (1995)
  3. Ardagna, C.A., Bussard, L., De Capitani Di Vimercati, S., Neven, G., Paraboschi, S., Pedrini, E., Preiss, S., Raggett, D., Samarati, P., Trabelsi, S., Verdicchio, M.: PrimeLife policy language (2009)
  4. Weitzner, D.J., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J., Sussman, G.J.: Information accountability. Commun. ACM 51, 82–87 (2008)
  5. Xiao, Z., Kathiresshan, N., Xiao, Y.: A survey of accountability in computer networks and distributed systems. Secur. Commun. Netw. 5, 1083–1085 (2012)
  6. Pearson, S., Wainwright, N.: An interdisciplinary approach to accountability for future internet service provision. Int. J. Trust Manag. Comput. Commun. 1, 52–72 (2013)
  7. Le Métayer, D.: A formal privacy management framework. In: Degano, P., Guttman, J., Martinelli, F. (eds.) FAST 2008. LNCS, vol. 5491, pp. 162–176. Springer, Heidelberg (2009)
  8. DeYoung, H., Garg, D., Jia, L., Kaynar, D., Datta, A.: Experiences in the logical specification of the HIPAA and GLBA privacy laws. In: 9th Annual ACM Workshop on Privacy in the Electronic Society (WPES 2010), pp. 73–82 (2010)
  9. Feigenbaum, J., Jaggard, A.D., Wright, R.N., Xiao, H.: Systematizing "accountability" in computer science. Technical report YALEU/DCS/TR-1452, Yale University (2012)
  10. Jagadeesan, R., Jeffrey, A., Pitcher, C., Riely, J.: Towards a theory of accountability and audit. In: Backes, M., Ning, P. (eds.) ESORICS 2009. LNCS, vol. 5789, pp. 152–167. Springer, Heidelberg (2009)
  11. Sundareswaran, S., Squicciarini, A., Lin, D.: Ensuring distributed accountability for data sharing in the cloud. IEEE Trans. Dependable Secure Comput. 9, 556–568 (2012)
  12. Haeberlen, A., Aditya, P., Rodrigues, R., Druschel, P.: Accountable virtual machines. In: 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pp. 119–134 (2010)
  13. Wei, W., Du, J., Yu, T., Gu, X.: SecureMR: a service integrity assurance framework for MapReduce. In: Proceedings of the 2009 Annual Computer Security Applications Conference, pp. 73–82. IEEE Computer Society, Washington, DC (2009)
  14. Zou, J., Wang, Y., Lin, K.J.: A formal service contract model for accountable SaaS and cloud services. In: International Conference on Services Computing, pp. 73–80. IEEE (2010)
  15. US Congress: Health Insurance Portability and Accountability Act of 1996, Privacy Rule. 45 CFR 164 (2002)
  16. Legislative Assembly of Ontario: Freedom of Information and Protection of Privacy Act (R.S.O. 1990, c. F.31) (1988)
  17. Breaux, T.D., Anton, A.I.: Deriving semantic models from privacy policies. In: Sixth IEEE International Workshop on Policies for Distributed Systems and Networks (POLICY 2005), pp. 67–76 (2005)
  18. Kerrigan, S., Law, K.H.: Logic-based regulation compliance-assistance. In: International Conference on Artificial Intelligence and Law, pp. 126–135 (2003)
  19. US Congress: Gramm-Leach-Bliley Act, Financial Privacy Rule. 15 USC 6801–6809 (1999)
  20. Garaga, A., de Oliveira, A.S., Sendor, J., Azraoui, M., Elkhiyaoui, K., Molva, R., Önen, M., Cherrueau, R.A., Douence, R., Grall, H., Royer, J.C., Sellami, M., Südholt, M., Bernsmed, K.: Policy Representation Framework. Technical report D:C-4.1, Accountability for Cloud and Future Internet Services - A4Cloud Project (2013)
  21. OASIS Standard: eXtensible Access Control Markup Language (XACML) Version 3.0, 22 January 2013
  22. Marchiori, M.: The platform for privacy preferences 1.0 (P3P1.0) specification. W3C recommendation, W3C (2002). TR/2002/REC-P3P-20020416/
  23. Becker, M.Y., Malkis, A., Bussard, L.: S4P: a generic language for specifying privacy preferences and policies. Technical report MSR-TR-2010-32, Microsoft Research (2010)
  24. Aktug, I., Naliuka, K.: ConSpec - a formal language for policy specification. Electron. Notes Theor. Comput. Sci. 197, 45–58 (2008)
  25. Damianou, N., Dulay, N., Lupu, E.C., Sloman, M.: The Ponder policy specification language. In: Sloman, M., Lobo, J., Lupu, E.C. (eds.) POLICY 2001. LNCS, vol. 1995, pp. 18–38. Springer, Heidelberg (2001)
  26. Barros, A., Oberle, D.: Handbook of Service Description: USDL and Its Methods. Springer, New York (2012)
  27. Lamanna, D.D., Skene, J., Emmerich, W.: SLAng: a language for defining service level agreements. In: Proceedings of the Ninth IEEE Workshop on Future Trends of Distributed Computing Systems, pp. 100–106. IEEE Computer Society, Washington, DC (2003)
  28. OASIS Web Service Security (WSS) TC: Web Services Security: SOAP Message Security 1.1 (2006)
  29. OASIS Web Services Secure Exchange (WS-SX) TC: WS-Trust 1.4 (2012)
  30. Bray, T., Paoli, J., Sperberg-McQueen, C.M., Maler, E., Yergeau, F.: Extensible markup language (XML). World Wide Web J. 2, 27–66 (1997)
  31. Butin, D., Chicote, M., Le Métayer, D.: Log design for accountability. In: IEEE CS Security and Privacy Workshops (SPW), pp. 1–7 (2013)
  32. Henze, M., Großfengels, M., Koprowski, M., Wehrle, K.: Towards data handling requirements-aware cloud computing. In: 2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom) (2013)
  33. Bradner, S.: IETF RFC 2119: Key words for use in RFCs to Indicate Requirement Levels. Technical report (1997)
  34. Knuth, D.E.: Backus normal form vs. Backus Naur form. Commun. ACM 7, 735–736 (1964)
  35. Fisher, M.: Temporal representation and reasoning. In: van Harmelen, F., Lifschitz, V., Porter, B. (eds.) Handbook of Knowledge Representation, pp. 513–550. Elsevier, Amsterdam (2008)
  36. Benghabrit, W., Grall, H., Royer, J.-C., Sellami, M., Bernsmed, K., De Oliveira, A.S.: Abstract accountability language. In: Zhou, J., Gal-Oz, N., Zhang, J., Gudes, E. (eds.) IFIPTM 2014. IFIP AICT, vol. 430, pp. 229–236. Springer, Heidelberg (2014)
  37. Benghabrit, W., Grall, H., Royer, J.C., Sellami, M.: Accountability for abstract component design. In: 40th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Verona, Italy (2014)
  38. Cranen, S., Groote, J.F., Keiren, J.J.A., Stappers, F.P.M., de Vink, E.P., Wesselink, W., Willemse, T.A.C.: An overview of the mCRL2 toolset and its recent advances. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013 (ETAPS 2013). LNCS, vol. 7795, pp. 199–213. Springer, Heidelberg (2013)
  39. Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3, 30–50 (2000)
  40. Allam, D., Douence, R., Grall, H., Royer, J.C., Südholt, M.: Well-Typed Services Cannot Go Wrong. Rapport de recherche RR-7899, INRIA (2012)
  41. Bernsmed, K., Felici, M., Oliveira, A.S.D., Sendor, J., Moe, N.B., Rübsamen, T., Tountopoulos, V., Hasnain, B.: Use case descriptions. Deliverable, Cloud Accountability (A4Cloud) Project (2013)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Walid Benghabrit (1)
  • Hervé Grall (1)
  • Jean-Claude Royer (1)
  • Mohamed Sellami (5)
  • Monir Azraoui (2)
  • Kaoutar Elkhiyaoui (2)
  • Melek Önen (2)
  • Anderson Santana De Oliveira (3)
  • Karin Bernsmed (4)

  1. Mines Nantes, Nantes, France
  2. EURECOM, Biot, Sophia Antipolis, France
  3. SAP Labs France, Mougins, Sophia Antipolis, France
  4. SINTEF ICT, Trondheim, Norway
  5. ISEP, Issy Les Moulineaux, France
