From Regulatory Obligations to Enforceable Accountability Policies in the Cloud
The widespread adoption of the cloud model for service delivery has triggered several data protection issues. Indeed, the proper delivery of these services typically involves sharing personal and business data among the different parties involved in the service provisioning. In order to increase cloud consumers' trust, there must be guarantees on the fair use of their data. Accountability provides the necessary assurance about data governance practices to the different stakeholders involved in a cloud service chain. In this context, we propose a framework for the representation of accountability policies. Such policies offer end-users a clear view of the privacy and accountability clauses asserted by the entities they interact with, as well as means to represent their preferences. Our framework offers two accountability policy languages: (i) an abstract language called AAL, devoted to the representation of preferences and clauses in a human-readable fashion, and (ii) a concrete one for the implementation of enforceable policies.
Keywords: Accountability · Data protection · Framework · Policy language · Policy enforcement
According to , accountability concerns the data stewardship regime in which organizations entrusted with personal and business confidential data are responsible and liable for processing, sharing, storing and otherwise using the data according to contractual and legal constraints, from the time the data is collected until it is destroyed (including onward transfers to third parties). Obligations associated with such responsibilities can be expressed in an accountability policy, a set of rules that defines the conditions under which an accountable entity must operate.
Today, there is neither an established standard for expressing accountability policies nor a well defined way to enforce these policies. Since cloud services often combine infrastructure, platform and software applications to aggregate value and propose new cloud applications to individuals and organizations, it is fundamental for an accountability policy framework to enable “chains of accountability” across cloud services addressing regulatory, contractual, security and privacy concerns.
Access and Usage Control rules - express which rights should be granted or revoked regarding the use and the distribution of data in cloud infrastructures, and support the definition of roles as specified in the Data Protection Directive, e.g. data controller and data processor.
Capturing privacy preferences and consent - to express user preferences about the usage of their personal data, to whom data can be released, and under which conditions.
Data Retention Periods - to express time constraints on how long collected personal data may be retained.
Controlling Data Location and Transfer - clear information about data whereabouts must be provided, depending on the type of data stored and on the industry sector processing the data (subject to specific regulations). Accountability policies for cloud services need to be able to express rules about data localization, such that accountable services can signal where the data centers hosting them are located. Here we consider strong policy binding mechanisms to attach policies to data.
Auditability - Policies must describe the clauses in such a way that actions taken while enforcing the policy can be audited, in order to ensure that the policy was adhered to. The accountability policy language must specify which events have to be audited and what information related to the audited event has to be recorded.
Reporting and notifications - to allow cloud providers to notify end-users and cloud customers, for instance in case of policy violations or incidents.
Redress - to express recommendations for redress in the policy, in order to remedy wrongful processing and address the causes of a failure.
In this paper we present a cloud accountability policy representation framework designed to address the aforementioned requirements. We define an abstract yet readable language, called AAL, for the representation of accountability clauses in a human-readable fashion. We also define a concrete policy enforcement language, called A-PPL, as an extension of the PPL  language. The proposed framework offers the means for a translation from abstract clauses expressed in AAL to concrete policies in A-PPL.
The rest of this paper is organized as follows. Section 2 describes related work. Section 3 gives an overview on the main components of our policy representation framework. We present the abstract accountability policy language we propose in Sect. 4 and the concrete one in Sect. 5. Section 6 describes a realistic use case as a proof of concept to our work. Section 7 discusses our work and presents directions for future work.
2 Related Work
In the following, we provide an overview of related work in the field. We organize this section along the following categories that relate to our contribution in this paper: accountability in computer science, obligations in legal texts and directives, enforcement and policy languages.
2.1 Accountability in Computer Science
There has been recent interest and active research on accountability, which overlaps several domains such as security [4, 5, 6], language representation [7, 8], auditing systems [9, 10] and evidence collection [11, 12]. However, only a few of these works consider an interdisciplinary view of accountability that takes legal and business aspects into account. We particularly emphasize the work in [6, 9], since it provides a general and concrete, yet operational, approach.
Regarding tool support and frameworks, we can find several proposals [12, 13, 14], but none of them provides a holistic approach for accountability in the cloud, from end-user understandable sentences to concrete machine-readable representations. In , the authors propose an end-to-end decentralized accountability framework to keep track of the usage of data in the cloud. They suggest an object-centered approach that packs the logging mechanism together with users' data and policies.
2.2 Obligations in Regulations
2.3 Enforcement and Policies
A number of policy languages have been proposed in recent years for machine-readable policy representation. We reviewed several existing policy languages (see  for details) and present here the results of our analysis. Rather than designing a new policy language for accountability, we leverage an existing concrete language that covers most of the accountability obligations listed in Sect. 1, and we aim to extend it with features that enable the expression of accountability policies.
The first step of our analysis is to check to what extent the selected policy languages can express the accountability obligations. None of the languages we reviewed enables the expression of all these obligations; however, each may cover one or several of them. For example, PPL  can be used to express access and usage control rules as well as privacy preferences and policies, and it supports notifications. From this first analysis, we classify our selected policy languages into four categories: (i) Access Control: eXtensible Access Control Markup Language (XACML, ); (ii) Privacy: The Platform for Privacy Preferences (P3P, ), the PrimeLife Policy Language (PPL, ) and SecPal for Privacy (SecPal4P, ); (iii) Policy specification for security: ConSpec  and Ponder ; (iv) Service Description: The Unified Service Description Language (USDL, ), SLAng  and WS-Policy [28, 29]. Note that one language can belong to several categories. We also argue that we cannot define, from our selected set of languages, an additional category of accountability languages. In particular, most of the languages do not provide means to express logging, reporting and audit obligations, which are essential to support accountability. Therefore, the accountability language we propose in the following sections represents an unprecedented attempt to express accountability obligations via a policy language.
In a second step, we analyze the extensibility of the reviewed languages, in order to extend one of them with accountability features. We focus on XML-based languages, since XML  provides many extension points; in addition, XML is a well-documented standard, so adding extensions to an XML-based language is fairly simple. Languages such as XACML, P3P and PPL use XML to define policies related to access control and privacy. In our work we consider PPL, since it provides the elements that best capture accountability obligations.
3 The Cloud Accountability Framework
In this section, we provide an overview of our proposed policy representation framework. Such a framework must allow end-users to easily express their accountability clauses and preferences, yet be complete and rigorous enough to be run by a policy execution engine. Hence we face the following dilemma: the policy must be written by an end-user, who does not necessarily have skills in a given policy language, while at the same time the policy must be machine-understandable, i.e., its sentences can be read, understood and executed by a computer.
In this context, we propose a policy representation framework (see Fig. 1) that allows a user (step (1) in Fig. 1) to express his accountability needs in a human-readable fashion and (2) offers the necessary means to translate them (semi-)automatically to a machine-understandable format.
Data subject: this role represents any end-user who has data privacy concerns, mainly because he has outsourced some of his data to a cloud provider.
Data processor: this role is attributed to any computational agent which processes some personal data. It should act under the control of a data controller.
Data controller: it is legally responsible to the data subject for any violations of its privacy and to the data protection authority in case of misconduct.
Auditor: it represents data protection authorities which are in charge of the application of laws and directives.
3.1 Step (1). Human/Machine Readable Representation
To express accountability clauses we define an Abstract Accountability Language (AAL), which is devoted to expressing accountability clauses in an unambiguous style and which is close to what the end-user needs and understands. As this is the human readable level, this language should be simple, akin to a natural logic, that is a logic expressed in a subset of a natural language.
For instance, a simple access control clause to state that “the data d cannot be read by all agents” will be formulated in a human/machine readable fashion using our accountability language as “DENY ANY:Agent.READ(d:Data)”. Details on the AAL syntax are provided in Sect. 4.
3.2 Step (2). Machine Understandable Representation
In this step (called the mapping), the accountability clauses expressed in AAL are (semi-)automatically translated into a machine understandable policy. We target a policy language that is able to enforce classic security means (like access or usage controls) but also accountability clauses. Such automatic translation may need several passes, due to the high level of abstraction of AAL.
As analyzed in Sect. 2, the PrimeLife Policy Language (PPL)  seems the most convenient language for privacy policy representation. It can be extended to address specific accountability obligations such as auditability, notification or logging. Hence, we propose an extension of PPL, called A-PPL (accountable PPL), which supports such obligations. The details of this extension are described in Sect. 5.
4 Abstract Language
We introduce in this section AAL (Abstract Accountability Language), which is devoted to expressing accountability clauses in an unambiguous, human-readable style. The AAL concepts are presented in Sect. 4.1 and its syntax in Sect. 4.2; Sect. 4.3 sketches its semantics and verification, and Sect. 4.4 provides an outlook on our approach for a machine-understandable representation of AAL policies.
4.1 AAL Concepts
User preferences: expressing the clauses a data subject wants to be satisfied; for instance, that his data must not be distributed over the network, or may only be used for statistics by a given data processor, and so on.
Processor clauses: these are the clauses the data processor declares to ensure regarding the data management and processing.
Finally, as in many policy representation languages, we consider permission, obligation and prohibition in AAL. These notions occur in various approaches, such as PPL or the ENDORSE project. Permission, obligation and prohibition are respectively expressed in AAL sentences with the keywords PERMIT, MUST and MUSTNOT/DENY, as advocated by IETF RFC 2119 .
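Following the pattern of the access control clause shown in Sect. 3.1, the three modalities might be written as follows (an illustrative sketch: the agent, action and data names are invented for the example; only the DENY form appears verbatim earlier in the text):

```
PERMIT hospital:Agent.READ(d:Data)
MUST hospital:Agent.NOTIFY(patient:Agent)
DENY ANY:Agent.WRITE(d:Data)
```

The first clause grants a permission, the second states an obligation, and the third expresses a prohibition.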
4.2 AAL Syntax
Further examples of user preferences and clauses expressed in AAL are provided in Sect. 6.
4.3 Semantics and Verification
We define a formal semantics for AAL based on a pure temporal logic approach. However, since we need data and agent quantification, we more precisely rely on a first-order linear temporal logic . The translation is rather straightforward, except that the audit process introduces a new modality with a specific formal interpretation (see  for details). From this interpretation it is possible to check the consistency of an AAL formula, or the compliance of two AAL clauses, using a logical prover. This last feature is particularly suitable for matching user preferences against processor clauses. Furthermore, principles for abstract component design have been defined and can be checked with a model-checker. We present these design principles in , how to translate them into the μ-calculus, and how to verify AAL clauses with the mCRL2 toolset .
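As a hedged illustration of this semantics (the exact encoding is given in the cited work), the prohibition clause from Sect. 3.1, "DENY ANY:Agent.READ(d:Data)", could be interpreted as a first-order linear temporal logic formula stating that no agent ever reads d:

```latex
\forall a : \mathit{Agent}.\; \Box\, \neg \mathit{READ}(a, d)
```

where \(\Box\) is the "always" modality of linear temporal logic and the quantifier ranges over agents.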
4.4 Machine Understandability
(2’.1). First, a temporal logic is used to make AAL sentences more concrete, as temporal logic properties. Indeed, in an accountability policy we must represent the notions of permission, obligation and prohibition. In addition, there is a need to express conditions and various logical combinations. Furthermore, it is important to have a notion of time, at least logical discrete time, for instance to write: “X writes some data and then stores some logs”. Our target is a temporal logic with time; one concrete candidate is mCRL2 .
(2’.2). Second, a policy calculus is used to describe the operational semantics associated with the concrete properties defined in (2’.1). This calculus is based on the concept of a reference monitor  for both the agents and the data resources. It relies on previous work on distributed agents communicating via messages . This operational semantics provides means for abstractly executing the temporal logic expressions. This process is known as “program synthesis”: starting from a property, it generates a program ensuring/enforcing that property.
(2’.3). Finally, the generated policy using our policy calculus is (semi-)automatically translated to a machine understandable policy based on predefined transformation rules. Our target is the A-PPL extension of PPL which is described in the next section.
5 Concrete Language
Our accountability policy representation framework maps AAL clauses to concrete, operational, machine-understandable policies. As already mentioned in Sect. 2.3, in order not to define yet another completely new language to map accountability obligations to machine-understandable policies, we conducted a preliminary study of existing languages; among all the possible candidates, PPL seemed the one that best captures the accountability concepts. Therefore, in this section, we present how A-PPL extends PPL to address accountability obligations.
PPL implicitly identifies four roles: the data subject, data controller, third-party (downstream) data controller and data processor. Besides, PPL defines an obligation as a set of triggers and actions. Triggers are events related to the obligation that are filtered by a condition and that trigger the execution of actions. To this end, PPL defines markup to declare an obligation: inside the obligation environment, one can specify a set of triggers and their related actions.
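Schematically, a PPL obligation thus takes the following shape (an illustrative sketch: the element names follow the trigger/action pattern described above, but the exact PPL schema and the purpose value are not reproduced here):

```xml
<Obligation>
  <!-- events that activate the obligation, possibly filtered by a condition -->
  <TriggersSet>
    <TriggerPersonalDataAccessedForPurpose purpose="statistics"/>
  </TriggersSet>
  <!-- action executed when a trigger fires -->
  <Action>
    <ActionNotifyDataSubject/>
  </Action>
</Obligation>
```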
5.1 Extension of Roles
To address accountability concerns in a cloud environment, it might be necessary to include in the policy a reference to the role of the subject to which the policy is applied. For instance, in PPL it was not possible to identify the data controller. We therefore suggest adding a new attribute, attribute:role, to the PPL <Subject> element for this purpose. Furthermore, in addition to the four roles PPL inherently considers (data subject, data controller, downstream data controller, data processor), A-PPL extends PPL with one additional role: the auditor, a trusted third party that can conduct independent assessment of cloud services, information systems operations, and the performance and security of the cloud implementation. This new role is important to capture accountability-specific obligations such as auditability, reporting, notification and possibly redress.
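For instance, the new attribute might be used as follows (illustrative only; the role value strings are assumptions, not normative identifiers):

```xml
<!-- subject acting as the data controller -->
<Subject attribute:role="data-controller">
  ...
</Subject>
<!-- the new A-PPL auditor role -->
<Subject attribute:role="auditor">
  ...
</Subject>
```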
5.2 Extension of Actions and Triggers
We add to PPL a set of new A-PPL actions and triggers in order to map accountability obligations. We introduce two new triggers that relate to access control: TriggerPersonalDataAccessPermitted and TriggerPersonalDataAccessDenied. They fire actions based on the result of an access decision taken on a piece of data. In other words, if the evaluation of the access control on the targeted data results in Permit, TriggerPersonalDataAccessPermitted may trigger an action specified in the policy. Symmetrically, TriggerPersonalDataAccessDenied triggers actions when an access to a piece of data is denied. We also enhance the existing PPL mechanisms for logging and notification. For instance, while PPL currently enables notification through ActionNotifyDataSubject, A-PPL defines a new and more general ActionNotify action in which one can define the recipient of the notification through a newly defined parameter, recipient. Moreover, an additional notification type parameter defines the purpose of the notification, which can be, for example, a policy violation, evidence or redress notification. On the other hand, the current ActionLog action in PPL fails to capture accountability obligations. The new ActionLog action in A-PPL introduces several additional parameters that provide more explicit information on the logged event; for example, timestamp defines the time of the event and resource location identifies the resource the action was taken on. We also create two actions related to auditability: ActionAudit, which creates an evidence request, and ActionEvidenceCollection, which collects the requested evidence. In addition, auditability requires the definition of two new triggers related to evidence: TriggerOnEvidenceRequestReceived, which occurs when an auditee receives an evidence request, and TriggerOnEvidenceReceived, which occurs when an auditor receives the requested evidence.
Similarly, when a policy or a user preference is updated, the update may trigger a set of actions to be performed. Thus, we create two additional triggers: TriggerOnPolicyUpdate and TriggerOnPreferenceUpdate. Finally, to handle complaints that a data subject may file in the context of remediability, we define the trigger TriggerOnComplaint, which triggers a set of specific actions to be undertaken by an auditor and/or a data controller.
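Putting several of the new elements together, an A-PPL obligation reacting to a denied access might be sketched as follows (the trigger, action and parameter names are those introduced above; the surrounding XML structure and the attribute values are assumptions for illustration):

```xml
<Obligation>
  <TriggersSet>
    <!-- fires when an access request on the targeted data is denied -->
    <TriggerPersonalDataAccessDenied/>
  </TriggersSet>
  <Action>
    <!-- notify the data subject of the attempted policy violation -->
    <ActionNotify recipient="data-subject" type="policy-violation"/>
    <!-- record the event with explicit context for later audit -->
    <ActionLog timestamp="..." resource-location="..."/>
  </Action>
</Obligation>
```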
In this section we validate our policy representation framework by extracting obligations from one of the use cases documented in the A4Cloud public deliverable DB3.1  and illustrating their representation in AAL and A-PPL.
6.1 The Health Care Use Case
In this use case the patients are the data subjects from whom personal data is collected. The hospital is ultimately responsible for the health care services and will hence act as one of the data controllers for the personal data that will be collected. The patients’ relatives may also upload personal data about the patients and can therefore be seen as data controllers (as well as data subjects, when personal data about their usage of the system is collected from them). As can be seen in Fig. 4, the use case will involve cloud services for sensor data collection and processing (denoted cloud provider “X”), cloud services for data storage (denoted cloud provider “Y”) and cloud services for information sharing (denoted cloud provider “M”), which will be operated by a collaboration of different providers. Since the primary service provider M, with whom the users will interface, employs two sub-providers, a chain of service delivery will be created. In this particular case, the M platform provider will be the primary service provider and will act as a data processor with respect to the personal data collected from the patients. Also the sub-providers, X and Y, will act as data processors. The details of the use case are further described in .
6.2 Obligations for the Use Case
We have identified a number of obligations for this use case which need to be handled by the accountability policy framework. Here we list three examples and explain how they are expressed in AAL and mapped into A-PPL. Note that the complete list of obligations is much longer; we have chosen to outline those that illustrate the most important relationships between the involved actors. Due to space limitations we do not include the complete A-PPL policies here; the reader is referred to the project documentation  for the full policy expressions.
Obligation 1: The Data Subject’s Right to Access, Correct and Delete Personal Data. According to the Data Protection Directive , data subjects have (among others) the right to access, correct and delete personal data that has been collected about them. In this use case, this means that the hospital must grant read and write access to patients, as well as to relatives, with regard to their personal data that has been collected and stored in the cloud. There must also be means to enforce the deletion of such data.
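In AAL, this obligation might be formulated along the following lines (an illustrative sketch: only the keywords and the condition D.subject=Patient are fixed by the language description above; the agent names and the WHERE notation are assumptions):

```
PERMIT Patient:Agent.READ(D:Data) WHERE D.subject=Patient
PERMIT Patient:Agent.WRITE(D:Data) WHERE D.subject=Patient
PERMIT Patient:Agent.DELETE(D:Data) WHERE D.subject=Patient
MUST Auditor:Agent.AUDIT(Hospital:Agent)
```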
The condition D.subject=Patient expresses that this clause only concerns the personal data of the Patient. This clause also expresses the audit and rectification obligations that have to be ensured by an external Auditor.
Using the accountability policy representation framework, the AAL expression will be mapped into two different A-PPL expressions: one permitting read and write access to the patients, and another obliging the data controller to delete the personal data whenever requested. Read and write access control is achieved through XACML rules. Regarding deletion of data, a patient can express data handling preferences that specify the obligation the data controller has to enforce to delete the personal data. This obligation can be expressed using the A-PPL obligation action ActionDeletePersonalData, which will be used by the patient to delete personal data that has been collected about him.
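A minimal sketch of the deletion part of this mapping (only ActionDeletePersonalData is named in the text; the trigger name and the XML structure are hypothetical):

```xml
<Obligation>
  <TriggersSet>
    <!-- hypothetical trigger: the data subject requests deletion -->
    <TriggerDataSubjectDeletionRequest/>
  </TriggersSet>
  <Action>
    <!-- obliges the data controller to delete the collected personal data -->
    <ActionDeletePersonalData/>
  </Action>
</Obligation>
```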
An explicit audit clause implies that information related to the usage control property is logged (the amount and the nature of this information is not discussed here). Thus the audit clause is translated into an AuditAction, which is responsible for managing the interaction with the auditor. This runs an exchange protocol with the auditor that ends with one of two responses: either no violation of the usage has been detected, or a violation exists. In the latter case, some rectification clauses should be specified.
In the sequel we only consider usage control clauses since the translation process for audit and rectification is similar to the previous example.
In A-PPL, such a notification is expressed through the obligation action ActionNotify. It takes as parameters the recipient of the notification (here, the data subject) and the type of the notification (here, security breach).
In A-PPL, the obligation trigger TriggerPersonalDataDeleted combined with the obligation action ActionNotify will notify the data subject of the deletion of his data. In addition, if necessary, the obligation action ActionLog will allow the provider M to log when personal data has been deleted.
The three examples that we have provided in this section represent a snapshot of the full power of AAL and A-PPL. In  we outline more examples of obligations for the health care use case, which among other things demonstrate how informed consent can be gathered from the patients before their data is processed, how the purpose of personal data collection can be specified and controlled, how the data processor M can inform the hospital of the use of sub-processors, and how the data processors can enable regulators to review evidence on their data processing practices.
Dealing with personal data in the cloud raises several accountability and privacy issues that must be addressed to promote the safe usage of cloud services. In this paper we tackle the representation of accountability clauses and preferences. We propose a cloud accountability policy representation framework that enables accountability policy expression in a human-readable fashion using our abstract accountability language (AAL), and offers the means to map such policies to concrete enforcement policies written in our accountability policy language (A-PPL). Our framework applies the separation-of-concerns principle by separating the abstract language from the concrete one. This choice makes both contributions, i.e. AAL and A-PPL, self-contained and allows their independent use. The ability of our framework to represent accountability clauses and preferences was validated through a realistic use case.
Our future research will focus on the mapping from AAL to A-PPL. As part of our implementation work, we are currently developing two prototypes: an AAL editor that assists end-users in writing their preferences/clauses and implements the artifacts required to map them to concrete policies in A-PPL, and an A-PPL policy execution engine that will be in charge of interpreting and matching A-PPL policies and preferences.
This work was funded by the EU’s 7th Framework Programme A4Cloud project.
- 1. Pearson, S., Tountopoulos, V., Catteddu, D., Südholt, M., Molva, R., Reich, C., Fischer-Hübner, S., Millard, C., Lotz, V., Jaatun, M.G., Leenes, R., Rong, C., Lopez, J.: Accountability for cloud and other future internet services. In: CloudCom, pp. 629–632. IEEE (2012)
- 2. Directive, E.U.: Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (1995). http://ec.europa.eu/justice/policies/privacy/docs/95--46-ce/dir1995-46_part1_en.pdf
- 3. Ardagna, C.A., Bussard, L., De Capitani Di Vimercati, S., Neven, G., Paraboschi, S., Pedrini, E., Preiss, S., Raggett, D., Samarati, P., Trabelsi, S., Verdicchio, M.: PrimeLife policy language (2009). http://www.w3.org/2009/policy-ws/papers/Trabelisi.pdf
- 8. DeYoung, H., Garg, D., Jia, L., Kaynar, D., Datta, A.: Experiences in the logical specification of the HIPAA and GLBA privacy laws. In: 9th Annual ACM Workshop on Privacy in the Electronic Society (WPES 2010), pp. 73–82 (2010)
- 9. Feigenbaum, J., Jaggard, A.D., Wright, R.N., Xiao, H.: Systematizing “accountability” in computer science. Technical report YALEU/DCS/TR-1452, Yale University (2012)
- 12. Haeberlen, A., Aditya, P., Rodrigues, R., Druschel, P.: Accountable virtual machines. In: 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pp. 119–134 (2010)
- 13. Wei, W., Du, J., Yu, T., Gu, X.: SecureMR: a service integrity assurance framework for MapReduce. In: Proceedings of the 2009 Annual Computer Security Applications Conference, pp. 73–82. IEEE Computer Society, Washington, DC (2009)
- 14. Zou, J., Wang, Y., Lin, K.J.: A formal service contract model for accountable SaaS and cloud services. In: International Conference on Services Computing, pp. 73–80. IEEE (2010)
- 15. US Congress: Health Insurance Portability and Accountability Act of 1996, Privacy Rule. 45 CFR 164 (2002). http://www.access.gpo.gov/nara/cfr/waisidx_07/45cfr164_07.html
- 16. Legislative Assembly of Ontario: Freedom of Information and Protection of Privacy Act (R.S.O. 1990, c. F.31) (1988)
- 17. Breaux, T.D., Anton, A.I.: Deriving semantic models from privacy policies. In: Sixth IEEE International Workshop on Policies for Distributed Systems and Networks (POLICY 2005), pp. 67–76 (2005)
- 18. Kerrigan, S., Law, K.H.: Logic-based regulation compliance-assistance. In: International Conference on Artificial Intelligence and Law, pp. 126–135 (2003)
- 19. US Congress: Gramm-Leach-Bliley Act, Financial Privacy Rule. 15 USC 6801–6809 (1999). http://www.law.cornell.edu/uscode/usc_sup_01_15_10_94_20_I.html
- 20. Garaga, A., de Oliveira, A.S., Sendor, J., Azraoui, M., Elkhiyaoui, K., Molva, R., Önen, M., Cherrueau, R.A., Douence, R., Grall, H., Royer, J.C., Sellami, M., Südholt, M., Bernsmed, K.: Policy Representation Framework. Technical report D:C-4.1, Accountability for Cloud and Future Internet Services - A4Cloud Project (2013). http://www.a4cloud.eu/sites/default/files/D34.1%20Policy%20representation%20Framework.pdf
- 21. OASIS Standard: eXtensible Access Control Markup Language (XACML) Version 3.0, 22 January 2013. http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.html
- 22. Marchiori, M.: The Platform for Privacy Preferences 1.0 (P3P1.0) specification. W3C Recommendation, W3C (2002). http://www.w3.org/TR/2002/REC-P3P-20020416/
- 23. Becker, M.Y., Malkis, A., Bussard, L.: S4P: a generic language for specifying privacy preferences and policies. Technical report MSR-TR-2010-32, Microsoft Research (2010)
- 27. Lamanna, D.D., Skene, J., Emmerich, W.: SLAng: a language for defining service level agreements. In: Proceedings of the Ninth IEEE Workshop on Future Trends of Distributed Computing Systems, pp. 100–106. IEEE Computer Society, Washington, DC (2003)
- 28. OASIS Web Service Security (WSS) TC: Web Services Security: SOAP Message Security 1.1 (2006). https://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf
- 29. OASIS Web Services Secure Exchange (WS-SX) TC: WS-Trust 1.4 (2012). http://docs.oasis-open.org/ws-sx/ws-trust/v1.4/errata01/os/ws-trust-1.4-errata01-os-complete.html
- 30. Bray, T., Paoli, J., Sperberg-McQueen, C.M., Maler, E., Yergeau, F.: Extensible Markup Language (XML). World Wide Web J. 2, 27–66 (1997)
- 31. Butin, D., Chicote, M., Le Métayer, D.: Log design for accountability. In: IEEE CS Security and Privacy Workshops (SPW), pp. 1–7 (2013)
- 32. Henze, M., Großfengels, M., Koprowski, M., Wehrle, K.: Towards data handling requirements-aware cloud computing. In: 2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom) (2013)
- 33. Bradner, S.: IETF RFC 2119: Key words for use in RFCs to indicate requirement levels. Technical report (1997)
- 36. Benghabrit, W., Grall, H., Royer, J.-C., Sellami, M., Bernsmed, K., De Oliveira, A.S.: Abstract accountability language. In: Zhou, J., Gal-Oz, N., Zhang, J., Gudes, E. (eds.) IFIPTM 2014. IFIP AICT, vol. 430, pp. 229–236. Springer, Heidelberg (2014)
- 37. Benghabrit, W., Grall, H., Royer, J.C., Sellami, M.: Accountability for abstract component design. In: 40th EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Verona, Italy (2014)
- 38. Cranen, S., Groote, J.F., Keiren, J.J.A., Stappers, F.P.M., de Vink, E.P., Wesselink, W., Willemse, T.A.C.: An overview of the mCRL2 toolset and its recent advances. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013 (ETAPS 2013). LNCS, vol. 7795, pp. 199–213. Springer, Heidelberg (2013)
- 40. Allam, D., Douence, R., Grall, H., Royer, J.C., Südholt, M.: Well-typed services cannot go wrong. Rapport de recherche RR-7899, INRIA (2012)
- 41. Bernsmed, K., Felici, M., Oliveira, A.S.D., Sendor, J., Moe, N.B., Rübsamen, T., Tountopoulos, V., Hasnain, B.: Use case descriptions. Deliverable, Cloud Accountability (A4Cloud) Project (2013)