# Modelling and Verifying Contract-Oriented Systems in Maude

## Abstract

We address the problem of modelling and verifying contract-oriented systems, wherein distributed agents may advertise and stipulate contracts, but — differently from most other approaches to distributed agents — are not assumed to always behave “honestly”. We describe an executable specification in Maude of the semantics of CO_{2}, a calculus for contract-oriented systems [6]. The *honesty* property [5] characterises those agents which always respect their contracts, in *all* possible execution contexts. Since there is an infinite number of such contexts, honesty cannot be directly verified by model-checking the state space of an agent (indeed, honesty is an undecidable property in general [5]). The main contribution of this paper is a sound verification technique for honesty. To do that, we safely over-approximate the honesty property by abstracting from the actual contexts a process may be engaged with. Then, we develop a model-checking technique for this abstraction, we describe an implementation in Maude, and we discuss some experiments with it.

## Keywords

Latent Contract, Abstract Semantics, Bilateral Contract, Execution Context, Executable Specification

## 1 Introduction

Contract-oriented
computing is a software design paradigm where the interaction between clients and services is disciplined through contracts [4, 6]. Contract-oriented services start their life-cycle by advertising contracts which specify their required and offered behaviour. When compliant contracts are found, a session is created among the respective services, which may then start interacting to fulfil their contracts. Differently from other design paradigms (e.g. those based on the session types discipline [10]), services are not assumed to be *honest*, in that they might not respect the promises made [5]. This may happen either unintentionally (because of errors in the service specification), or because of malicious behaviour.

Dishonest behaviour is assumed to be automatically detected and sanctioned by the service infrastructure. This gives rise to a new kind of attack, which exploits possible discrepancies between the promised and the actual behaviour. If a service does not behave as promised, an attacker can induce it into a situation where the service is sanctioned, while the attacker is reckoned honest. A crucial problem is then how to prevent a service from being found definitively culpable of a contract violation, despite the honest intentions of its developer.

In this paper we present an executable specification in Maude [9] of CO_{2}, a calculus for contract-oriented computing [4]. Furthermore, we devise and implement a sound verification technique for honesty. We start in Sect. 2 by introducing a new model for contracts. Borrowing from other approaches to behavioural contracts [5, 8], ours are bilateral contracts featuring internal/external choices, and recursion. We define and implement in Maude two crucial primitives on contracts, i.e. *compliance* and *culpability testing*, and we study some relevant properties.

In Sect. 3 we present CO_{2} (instantiated with the contracts above), and an executable specification of its semantics in Maude. In Sect. 4 we formalise a weak notion of honesty, i.e. when a process \(P\) is honest *in a given context*, and we implement and experiment with it through the Maude model checker.

The main technical results follow in Sect. 5, where we deal with the problem of checking honesty in *all* possible contexts. To do that, we start by defining an abstract semantics of CO_{2}, which preserves the transitions of a participant \({\mathsf {A}} [{P}] \), while abstracting those of the context wherein \({\mathsf {A}} [{P}] \) is run. Building upon the abstract semantics, we then devise an abstract notion of honesty (\(\alpha \) *-honesty*, Definition 11), which neglects the execution context. Theorem 5 states that \(\alpha \)-honesty correctly approximates honesty, and that — under certain hypotheses — it is also complete. We then propose a verification technique for \(\alpha \)-honesty, and we provide an implementation in Maude. Some experiments have then been carried out; quite notably, our tool has allowed us to determine the dishonesty of a supposedly-honest CO_{2} process which appeared in [5] (see Example 5).

Because of space limits, we make available online the proofs of all our statements, as well as the Maude implementation, and the experiments made [2].

## 2 Modelling Contracts

We model contracts as processes in a simple algebra, with internal/external choice and recursion. Compliance between contracts ensures progress, until a successful state is reached. We prove that our model enjoys some relevant properties. First, in each non-final state of a contract there is exactly one participant who is *culpable*, i.e., expected to make the next move (Theorem 1). Furthermore, a participant always recovers from culpability in at most two steps (Theorem 2).

*Syntax.* We assume a finite set of *participant names* (ranged over by \({\mathsf {A}}, {\mathsf {B}}, \ldots \)) and a denumerable set of *atoms* (ranged over by \(\mathsf a , \mathsf b , \ldots \)). We postulate an involution \({\mathrm{co }}\!\left( \mathsf{a }\right) \), also written as \(\bar{\mathsf{a }}\), extended to sets of atoms in the natural way. Definition 1 introduces the syntax of contracts. We distinguish between (*unilateral*) contracts \(c\), which model the promised behaviour of a single participant, and *bilateral* contracts \(\gamma \), which combine the contracts advertised by two participants.

### **Definition 1**

*Unilateral contracts* are defined by the following grammar:

\[ c, d \;::=\; \bigoplus _{i \in \mathcal {I}} \mathsf{a _i} \, ; \, {c_i} \;\;\big |\;\; \sum _{i \in \mathcal {I}} \mathsf{a _i} \, . \, {c_i} \;\;\big |\;\; ready \; {\mathsf a } \, . \, {c} \;\;\big |\;\; \mathrm {rec}\; {X} .\; c \;\;\big |\;\; X \]

where \((i)\) the index set \(\mathcal {I}\) is finite; \((ii)\) the “\( ready \; {}\)” prefix may appear at the top-level, only; \((iii)\) recursion is guarded.

*Bilateral contracts* \(\gamma \) are terms of the form \({\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\), where \(\mathsf {A} \!\ne \! \mathsf {B}\) and at most one occurrence of “\( ready \; {}\!\!\!\)” is present. The order of unilateral contracts in \(\gamma \) is immaterial, i.e. \({\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d} \;\equiv \; \mathsf {B} \; says \;d \mid \mathsf {A} \; says \;c\).

An internal sum \(\bigoplus _{i \in \mathcal {I}} \mathsf{a _i} \, ; \, {c_i}\) allows a participant to choose one of the branches \(\mathsf{a _i} \, ; \, {c_i}\), to perform the action \(\mathsf a _i\), and then to behave according to \(c_i\). Dually, an external sum \(\sum _{i \in \mathcal {I}} \mathsf{a _i} \, . \, {c_i}\) waits for the other participant to choose one of the branches \(\mathsf{a _i} \, . \, {c_i}\), then performs the corresponding \(\mathsf a _i\) and behaves according to \(c_i\). The separators \(;\) and \(.\) distinguish singleton internal sums \(\mathsf{a } \, ; \, {c}\) from singleton external sums \(\mathsf{a } \, . \, {c}\). Empty internal/external sums are denoted with \(0\). We will only consider contracts without free occurrences of recursion variables \(X\).
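To fix intuitions, the contract syntax can be encoded as plain data. The sketch below is purely illustrative Python (the actual specification is in Maude, and all names here are our own): atoms are strings, and the involution is written as a "~" prefix.

```python
# Illustrative encoding of unilateral contracts (not the Maude spec).
# Atoms are strings; the involution co(a) toggles a "~" prefix, so that
# co(co(a)) = a, as required of an involution.

def co(a):
    """Involution on atoms: co("a") = "~a", co("~a") = "a"."""
    return a[1:] if a.startswith("~") else "~" + a

# A contract is a tagged tuple:
#   ("int", ((a1, c1), ...))   internal sum  ⊕_i a_i ; c_i
#   ("ext", ((a1, c1), ...))   external sum  Σ_i a_i . c_i
#   ("ready", a, c)            ready a . c   (top level only)
ZERO = ("int", ())             # the empty sum 0 (≡ empty external sum)

# The contracts of Example 2: c = a;c1 ⊕ b;c2  and  d = ~a.d1 + ~c.d2
c = ("int", (("a", ZERO), ("b", ZERO)))
d = ("ext", (("~a", ZERO), ("~c", ZERO)))
```

This tuple encoding is reused by the later sketches in this section.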

### *Example 1*

The operator - models the involution on atoms, with eq - - a:Atom = a:Atom. The other operators are rather standard, and they guarantee that each UniContract respects the syntactic constraints imposed by Definition 1.

*Semantics.* The evolution of bilateral contracts is modelled by \(\mathrel {\xrightarrow {\mu }\rightarrow }\), the smallest relation closed under the rules in Fig. 1 and under \(\equiv \). The congruence \(\equiv \) is the least relation including \(\alpha \)-conversion of recursion variables, and satisfying \(\mathrm {rec}\; {X} .\; c \equiv c\{\mathrm {rec}\; {X} .\; c / X\}\) and \(\bigoplus _{i \in \emptyset } \mathsf{a _i} \, ; \, {c_i} \equiv \sum _{i \in \emptyset } \mathsf{a _i} \, . \, {c_i}\). The label \(\mu = {\mathsf {A}} \; says \;\mathsf a \) models \(\mathsf {A}\) performing the action \(\mathsf a \). Hereafter, we shall consider contracts up-to \(\equiv \).

In rule [IntExt], participant \(\mathsf {A}\) selects the branch \(\mathsf a \) in an internal sum, and \(\mathsf {B}\) is then forced to commit to the corresponding branch \(\bar{\mathsf{a }}\) in his external sum. This is done by marking that branch with \( ready \; {\bar{\mathsf{a }}}\), while discarding all the other branches; \(\mathsf {B}\) will then perform his action in the subsequent step, by rule [Rdy].
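Rules [IntExt] and [Rdy] can be rendered operationally as follows. This is an illustrative Python sketch under the tagged-tuple encoding introduced above (our own, not the Maude rewrite rules):

```python
def co(a):
    """Involution on atoms, written as a "~" prefix."""
    return a[1:] if a.startswith("~") else "~" + a

def left_steps(c, d):
    """Moves of the left participant in c | d, as (atom, (c', d')) pairs."""
    out = []
    if c[0] == "ready":                    # [Rdy]: fire the committed atom
        _, a, cont = c
        out.append((a, (cont, d)))
    elif c[0] == "int" and d[0] == "ext":  # [IntExt]: select a branch a;
        ext = dict(d[1])                   # the partner commits to co(a)
        for a, cont in c[1]:
            if co(a) in ext:
                out.append((a, (cont, ("ready", co(a), ext[co(a)]))))
    return out

ZERO = ("int", ())
c = ("int", (("a", ZERO),))                # a ; 0
d = ("ext", (("~a", ZERO), ("~b", ZERO)))  # ~a . 0 + ~b . 0
[(label, (c1, d1))] = left_steps(c, d)     # A selects a; B commits to ~a
```

After the step, the partner's external sum has collapsed to the marked branch `("ready", "~a", ...)`, which fires in the subsequent step via the [Rdy] case.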

*Compliance.* Two contracts are *compliant* if, whenever a participant \(\mathsf {A}\) wants to choose a branch in an internal sum, then participant \(\mathsf {B}\) always offers \(\mathsf {A}\) the opportunity to do it. To formalise compliance, we first define a partial function \(\mathrm{rdy }\) from bilateral contracts to sets of atoms. Intuitively, if the unilateral contracts in \(\gamma \) do not agree on the first step, then \({\mathrm{rdy }}\!\left( {\gamma }\right) \) is undefined (i.e. equal to \(\bot \)). Otherwise, \({\mathrm{rdy }}\!\left( {\gamma }\right) \) contains the atoms which could be fired in the first step.

### **Definition 2**

**(Compliance).** Let the partial function \(\mathrm{rdy }\) from bilateral contracts to sets of atoms be defined as:

\[ {\mathrm{rdy }}\!\left( {{\mathsf {A}} \; says \;{\textstyle \bigoplus _{i \in \mathcal {I}} \mathsf{a _i} \, ; \, {c_i}} \mid {\mathsf {B}} \; says \;{\textstyle \sum _{j \in \mathcal {J}} \mathsf{b _j} \, . \, {d_j}}}\right) \;=\; \{\mathsf a _i\}_{i \in \mathcal {I}} \quad \text {if } \{\bar{\mathsf{a }}_i\}_{i \in \mathcal {I}} \subseteq \{\mathsf b _j\}_{j \in \mathcal {J}} \]

\[ {\mathrm{rdy }}\!\left( {{\mathsf {A}} \; says \; ready \; {\mathsf a }.c \mid {\mathsf {B}} \; says \;{d}}\right) \;=\; \{\mathsf a \} \]

(and symmetrically; \({\mathrm{rdy }}\!\left( {\gamma }\right) = \bot \) in all other cases). Then, the compliance relation \(\bowtie \) between unilateral contracts is the largest relation such that, whenever \(c \bowtie d\):

- (1) \( {\mathrm{rdy }}\!\left( {{\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}}\right) \ne \bot \)
- (2) \( {\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d} \mathrel {\xrightarrow {\mu }\rightarrow } {\mathsf {A}} \; says \;{c'} \mid {\mathsf {B}} \; says \;{d'}\implies c' \bowtie d' \)
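The \(\mathrm{rdy }\) function can be sketched as follows, again in illustrative Python under our tagged-tuple encoding, with `None` standing for \(\bot \) and the empty internal and external sums identified (as imposed by \(\equiv \)):

```python
def co(a):
    return a[1:] if a.startswith("~") else "~" + a

def rdy(c, d):
    """First-step agreement of A says c | B says d; None stands for ⊥."""
    for x, y in ((c, d), (d, c)):
        if x[0] == "ready":
            return {x[1]}
        # an empty sum counts as both internal and external (it is 0)
        if x[0] == "int" and (y[0] == "ext" or y[1] == ()):
            atoms = {a for a, _ in x[1]}
            if {co(a) for a in atoms} <= {b for b, _ in y[1]}:
                return atoms
    return None

ZERO = ("int", ())
c = ("int", (("a", ZERO), ("b", ZERO)))    # Example 2: rdy is undefined,
d = ("ext", (("~a", ZERO), ("~c", ZERO)))  # since ~b is not offered by B
```

Note that on the terminated contract \(0 \mid 0\) the function returns the empty set, which is defined (hence compliant), whereas a genuine first-step disagreement yields \(\bot \).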

### *Example 2*

Let \(\gamma = {\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\), where \(c = \mathsf{a } \, ; \, {c_1} \oplus \mathsf{b } \, ; \, {c_2}\) and \(d = {\bar{\mathsf{a }}} \, . \, {d_1} +{\bar{\mathsf{c }}} \, . \, {d_2}\). If the participant \(\mathsf {A}\) internally chooses to perform \(\mathsf a \), then \(\gamma \) will take a transition to \( {\mathsf {A}} \; says \;c_1 \mid {\mathsf {B}} \; says \; ready \; {\bar{\mathsf{a }}}.d_1 \). Suppose instead that \(\mathsf {A}\) chooses to perform \(\mathsf b \), which is not offered by \(\mathsf {B}\) in his external choice. In this case, \(\gamma \not \mathrel {\xrightarrow {\mathsf {A} \; says \;\mathsf b }\rightarrow }\). We have that \({\mathrm{rdy }}\!\left( {\gamma }\right) = \bot \), which violates item \((1)\) of Definition 2. Therefore, \(c\) and \(d\) are *not* compliant.

We say that a contract is *proper* if the prefixes of each summation are pairwise distinct. The next lemma states that each proper contract has a compliant one.

### **Lemma 1**

For all proper contracts \(c\), there exists \(d\) such that \(c \bowtie d\).

Definition 2 cannot be directly exploited as an algorithm for checking compliance. Lemma 2 gives an alternative, model-checkable characterisation of \(\bowtie \).
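In this spirit, compliance can be checked by exhaustively exploring the bilateral contracts reachable from \({\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\) and testing \(\mathrm{rdy }\) in each of them. A self-contained Python sketch under the illustrative encoding used above (recursion-free contracts only, so the state space is finite; not the Maude implementation):

```python
# Compliance by state-space exploration; None stands for ⊥.

def co(a):
    return a[1:] if a.startswith("~") else "~" + a

def rdy(c, d):
    for x, y in ((c, d), (d, c)):
        if x[0] == "ready":
            return {x[1]}
        if x[0] == "int" and (y[0] == "ext" or y[1] == ()):
            atoms = {a for a, _ in x[1]}
            if {co(a) for a in atoms} <= {b for b, _ in y[1]}:
                return atoms
    return None

def steps(c, d):
    """All successors of A says c | B says d under [IntExt] and [Rdy]."""
    out = []
    for flip, me, other in ((False, c, d), (True, d, c)):
        nxt = []
        if me[0] == "ready":
            nxt.append((me[2], other))
        elif me[0] == "int" and other[0] == "ext":
            ext = dict(other[1])
            nxt += [(cont, ("ready", co(a), ext[co(a)]))
                    for a, cont in me[1] if co(a) in ext]
        out += [(d2, c2) if flip else (c2, d2) for c2, d2 in nxt]
    return out

def compliant(c, d):
    """c ⋈ d iff rdy is defined in every reachable state."""
    seen, todo = set(), [(c, d)]
    while todo:
        g = todo.pop()
        if g in seen:
            continue
        seen.add(g)
        if rdy(*g) is None:
            return False
        todo += steps(*g)
    return True

ZERO = ("int", ())
ok = compliant(("int", (("a", ZERO),)), ("ext", (("~a", ZERO),)))
bad = compliant(("int", (("a", ZERO), ("b", ZERO))),
                ("ext", (("~a", ZERO), ("~c", ZERO))))
```

The Maude implementation follows the same idea, using the `search` facilities of the rewrite engine rather than an explicit worklist.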

### **Lemma 2**

\(c \bowtie d\) iff, for all \(\gamma '\) such that \({\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d} \mathrel {\xrightarrow {}\rightarrow }^* \gamma '\), it holds that \({\mathrm{rdy }}\!\left( {\gamma '}\right) \ne \bot \).
### *Example 3*

The following contracts are *not* compliant:

*Culpability.* We now tackle the problem of determining who is expected to make the next step for the fulfilment of a bilateral contract. We call a participant \(\mathsf {A}\) *culpable* in \(\gamma \) if she is expected to perform some action so as to make \(\gamma \) progress.

### **Definition 3**

A participant \(\mathsf {A}\) is *culpable* in \(\gamma \) iff \( \gamma \mathrel {\xrightarrow {{\mathsf {A}} \; says \;\mathsf a }\rightarrow } \) for some \(\mathsf a \); otherwise, we say that \(\mathsf {A}\) is *not* culpable in \(\gamma \).

Theorem 1 below establishes that, when starting with compliant contracts, exactly one participant is culpable in a bilateral contract. The only exception is \({\mathsf {A}} \; says \;{0} \mid {\mathsf {B}} \; says \;{0}\), which represents a successfully terminated interaction, where nobody is culpable.

### **Theorem 1**

Let \(\gamma = {\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\), with \(c \bowtie d\). If \(\gamma \mathrel {\xrightarrow {}\rightarrow }^* \gamma '\), then either \(\gamma ' = {\mathsf {A}} \; says \;{0} \mid {\mathsf {B}} \; says \;{0}\), or there exists a unique culpable participant in \(\gamma '\).

The following theorem states that a participant is always able to recover from culpability by performing some of her duties. This requires at most two steps.

### **Theorem 2**

**(Contractual exculpation).** Let \(\gamma = {\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\). For all \(\gamma '\) such that \(\gamma \mathrel {\xrightarrow {}\rightarrow }^* \gamma '\), we have that:

- (1) \(\gamma ' \not \mathrel {\xrightarrow {}\rightarrow }\) implies that no participant is culpable in \(\gamma '\);
- (2) if \(\mathsf {A}\) is culpable in \(\gamma '\), then \(\gamma ' \mathrel {\xrightarrow {}\rightarrow }^{\le 2} \gamma ''\) for some \(\gamma ''\) in which \(\mathsf {A}\) is not culpable.

Item (1) of Theorem 2 says that, in a stuck contract, no participant is culpable. Item (2) says that if \(\mathsf {A}\) is culpable, then she can always exculpate herself in *at most* two steps, i.e.: one step if \(\mathsf {A}\) has an internal choice, or a \( ready \; {}\!\) followed by an external choice; two steps if \(\mathsf {A}\) has a \( ready \; {}\!\) followed by an internal choice.
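Definition 3 can be rendered directly: a participant is culpable iff she has an enabled move. An illustrative Python sketch under the same encoding as before (our own, not the Maude specification):

```python
def co(a):
    return a[1:] if a.startswith("~") else "~" + a

def moves(c, d):
    """Enabled moves in A says c | B says d, as (participant, atom) pairs."""
    out = []
    for who, me, other in (("A", c, d), ("B", d, c)):
        if me[0] == "ready":
            out.append((who, me[1]))
        elif me[0] == "int" and other[0] == "ext":
            ext = {b for b, _ in other[1]}
            out += [(who, a) for a, _ in me[1] if co(a) in ext]
    return out

def culpable(who, c, d):
    """A is culpable in γ iff γ has a move of A (Definition 3)."""
    return any(w == who for w, _ in moves(c, d))

ZERO = ("int", ())
c = ("int", (("a", ZERO),))    # a ; 0 — A owns the internal choice
d = ("ext", (("~a", ZERO),))   # ~a . 0
```

Here `culpable("A", c, d)` holds and `culpable("B", c, d)` does not; after \(\mathsf {A}\)'s move, culpability passes to \(\mathsf {B}\), whose marked `ready` branch fires in one step, matching the exculpation bound of Theorem 2.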

## 3 Modelling Contracting Processes

We model agents and systems through the process calculus CO_{2} [3], which we instantiate with the contracts introduced in Sect. 2. The primitives of CO_{2} allow agents to advertise contracts, to open sessions between agents with compliant contracts, to execute them by performing some actions, and to query contracts.

*Syntax.* Let \(\mathcal V\) and \(\mathcal {N}\) be disjoint sets of *session variables* (ranged over by \(x,y,\ldots \)) and *session names* (ranged over by \(s,t,\ldots \)). Let \(u,v,\ldots \) range over \(\mathcal V\cup \mathcal {N}\), and \(\varvec{u},\varvec{v}\) range over \(2^{\mathcal V\cup \mathcal {N}}\).

### **Definition 4**

The syntax of CO_{2} is given as follows:

\[ S \;::=\; \mathbf {0} \;\;\big |\;\; {\mathsf {A}} [{P}] \;\;\big |\;\; {s} [{\gamma }] \;\;\big |\;\; \{\downarrow _{u\!\!}{c}\}_{\mathsf {A}} \;\;\big |\;\; S \mid S \;\;\big |\;\; (u)S \]

\[ P \;::=\; \textstyle \sum _{i}\pi _i.P_i \;\;\big |\;\; P \mid P \;\;\big |\;\; (u)P \;\;\big |\;\; X(\varvec{u}) \qquad \pi \;::=\; \tau \;\;\big |\;\; \mathsf {tell}_{}\,{\downarrow _{u}{c}} \;\;\big |\;\; \mathsf {do}_{u}\,\mathsf{a } \;\;\big |\;\; \mathsf {ask}_{{u}}\,{\phi } \]

Systems are the parallel composition of *participants* \({\mathsf {A}} [{P}] \), *delimited systems* \((u)S\), *sessions* \({s} [{\gamma }] \) and *latent contracts* \(\{\downarrow _{u\!\!}{c}\}_{\mathsf {A}}\). A latent contract \(\{\downarrow _{x\!\!}{c}\}_{\mathsf {A}}\) represents a contract \(c\) (advertised by \(\mathsf {A}\)) which has not been stipulated yet; upon stipulation, the variable \(x\) will be instantiated to a fresh session name. We assume that, in a system of the form \((\varvec{u})({\mathsf {A}} [{P}] \mid {\mathsf {B}} [{Q}] ) \mid \cdots )\), \(\mathsf {A} \ne \mathsf {B}\). We denote with \(\mathsf {K}\) a special participant name (playing the role of contract broker) such that, in each system \((\varvec{u})({\mathsf {A}} [{P}] \mid \cdots )\), \(\mathsf {A} \ne \mathsf {K}\). We allow for prefix-guarded finite sums of processes, and write \(\pi _1.P_1 + \pi _2.P_2\) for \(\sum _{i \in \{1,2\}}\pi _i.P_i\), and \(\mathbf {0}\) for \(\sum _{\emptyset }P\). Recursion is allowed only for processes; we stipulate that each process identifier \(X\) has a unique defining equation \(X(x_1, \ldots , x_j) \;\mathop {=}\limits ^\mathrm{\tiny def }\;P\) such that \(\mathrm {fv}(P) \subseteq \{x_1,\ldots ,x_j\} \subseteq \mathcal V\), and each occurrence of process identifiers in \(P\) is prefix-guarded. We will sometimes omit the arguments of \(X(\varvec{u})\) when they are clear from the context.

Prefixes include silent action \(\tau \), contract advertisement \(\mathsf {tell}_{}\,{\downarrow _{u}{c}}\), action execution \(\mathsf {do}_{u}\,\mathsf{a }\), and contract query \(\mathsf {ask}_{{u}}\,{\phi }\) (where \(\phi \) is an LTL formula on \(\gamma \)). In each prefix \(\pi \ne \tau \), \(u\) refers to the target session involved in the execution of \(\pi \).

The Maude specification follows the syntax of CO_{2} almost literally. Here we just show the sorts used; see [2] for the full details.

The sort SessionIde is a super sort of both SessionVariable and SessionName. Session variables can be of sort Qid; session names cannot. Sort IdeVec models sets of SessionIde (used as syntactic sugar for delimitations), while ParamList models vectors of SessionIde (used for parameters of defining equations).

*Semantics.* The CO_{2} semantics is formalised by the relation \(\xrightarrow {\mu }\) in Fig. 3, where \(\mu \in \left\{ {\mathsf {A}:\! \pi \!} \,\mid \, {\!\mathsf {A} \!\ne \! \mathsf {K}}\right\} \cup \{\mathsf {K}:\! \mathsf {fuse}\}\). We will consider processes and systems up-to the congruence relation \(\equiv \) in Fig. 2. The axioms for \(\equiv \) are fairly standard — except the last one: it collects garbage terms possibly arising from variable substitutions.

Rule [Tau] just fires a \(\tau \) prefix. Rule [Tell] advertises a latent contract \(\{\downarrow _{x\!}{c}\}_{\mathsf {A}}\). Rule [Fuse] finds *agreements* among the latent contracts: it happens when there exist \(\{\downarrow _{x\!\!}{c}\}_{\mathsf {A}}\) and \(\{\downarrow _{y\!\!}{d}\}_{\mathsf {B}}\) such that \(\mathsf {A} \!\ne \! \mathsf {B}\) and \(c \!\bowtie \! d\). Once the agreement is reached, a fresh session containing \(\gamma = {\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\) is created. Rule [Do] allows a participant \(\mathsf {A}\) to perform an action in the session \(s\) containing \(\gamma \) (which, accordingly, evolves to \(\gamma '\)). Rule [Ask] allows \(\mathsf {A}\) to proceed only if the contract \(\gamma \) at session \(s\) satisfies the property \(\phi \). The last three rules are mostly standard. In rule [\({\textsc {Del}}\)] the label \(\pi \) fired in the premise becomes \(\tau \) in the consequence, when \(\pi \) contains the delimited name/variable. This transformation is defined by the function \(\mathrm {del}_{u}({\pi })\), where the set \(\mathrm {fnv}(\pi )\) contains the free names/variables in \(\pi \). For instance, \( (x) \, {\mathsf {A}} [{\mathsf {tell}_{}\,{\downarrow _{x}{c}}.P}] \xrightarrow {\mathsf {A}:\ {\tau }} (x) \, ({\mathsf {A}} [{P}] \,\mid \, \{\downarrow _{x\!}{c}\}_{\mathsf {A}}) \). Here, it would make little sense to have the label \(\mathsf {A\!}:\ {\!\!\!\mathsf {tell}_{}\,{\downarrow _{x\!}{c}}}\), as \(x\) (being delimited) may be \(\alpha \)-converted.
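The broker's behaviour in rule [Fuse] can be sketched as follows. This is an illustrative Python fragment (names and data layout are ours, not the Maude rules): the broker scans the latent contracts and, when two contracts of distinct participants pass the compliance check, replaces them with a fresh session. The substitution of the session name for the variables \(x, y\) in the participants' processes is omitted here.

```python
def fuse(latent, sessions, compliant):
    """latent: list of (participant, var, contract) triples.
    Returns the updated latent contracts and sessions."""
    for i, (A, x, c) in enumerate(latent):
        for j, (B, y, d) in enumerate(latent):
            if i < j and A != B and compliant(c, d):
                s = "s%d" % len(sessions)          # fresh session name
                rest = [t for k, t in enumerate(latent) if k not in (i, j)]
                return rest, sessions + [(s, (A, c), (B, d))]
    return latent, sessions                        # no agreement found

# Toy run: contracts are opaque tokens, compliance is a stub predicate.
lat = [("A", "x", "ca"), ("B", "y", "cb"), ("C", "z", "cc")]
lat2, sess = fuse(lat, [], compliant=lambda c, d: {c, d} == {"ca", "cb"})
```

The latent contracts of A and B are consumed, C's contract stays latent, and a fresh session holding the stipulated bilateral contract is created.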

The Maude specification of the semantics of CO_{2} is almost straightforward [19]; here we show only the main rules (see [2] for the others). Rule [Do] uses the transition relation on bilateral contracts defined in Sect. 2. Rule [Ask] exploits the Maude model checker to verify if the bilateral contract g satisfies the LTL formula phi. Rule [Fuse] uses the operator |X| to check compliance between the contracts c and d, then creates the session s[A says c | B says d] (with s fresh), and finally applies the substitution { s / x } { s / y } (delimitations are dealt with as in Fig. 3).

## 4 Honesty

A remarkable feature of CO_{2} is that it allows for writing *dishonest* agents which do not keep their promises. Intuitively, a participant is honest if she always fulfils her contractual obligations, in all possible contexts. Below we formalise the notion of honesty, by slightly adapting the one that appeared in [3]. Then, we show how we verify in Maude a weaker notion, i.e. honesty *in a given context*.

We start by defining the set \({{\mathrm{O }}^{\mathsf {A}}_{s}}({S})\) of *obligations* of \(\mathsf {A}\) at \(s\) in \(S\). Whenever \(\mathsf {A}\) is culpable at some session \(s\), she has to fire one of the actions in \({{\mathrm{O }}^{\mathsf {A}}_{s}}({S})\).

### **Definition 5**

We say that \(\mathsf {A}\) is *culpable at* \(s\) *in* \(S\) iff \({{\mathrm{O }}^{\mathsf {A}}_{s}}({S}) \ne \emptyset \).

The set of atoms \({{\mathrm{RD }}^{\mathsf {A}}_{s}}({S})\) (“Ready Do”) defined below comprises all the actions that \(\mathsf {A}\) can perform at \(s\) in one computation step within \(S\) (note that, by rule [Del], if \(s\) is a bound name then \({{\mathrm{RD }}^{\mathsf {A}}_{s}}({S}) = \emptyset \)). The set \({{\mathrm{WRD }}^{A}_{s}}({S})\) (“Weak Ready Do”) contains all the actions that \(\mathsf {A}\) may possibly perform at \(s\) after a finite sequence of transitions of \(\mathsf {A}\) not involving any \(\mathsf {do}_{}\,{}\!\) at \(s\).

### **Definition 6**

A participant is *ready* if she can fulfil some of her obligations. To check if \(\mathsf {A}\) is ready in \(S\), we consider all the sessions \(s\) in \(S\) involving \(\mathsf {A}\). For each of them, we check that some obligations of \(\mathsf {A}\) at \(s\) are exposed after some steps of \(\mathsf {A}\) *not* preceded by other \(\mathsf {do}_{s}\,{\!}\) of \(\mathsf {A}\). \({\mathsf {A}} [{P}] \) is honest *in a given system* \(S\) when \(\mathsf {A}\) is ready in all evolutions of \({\mathsf {A}} [{P}] \mid S\). Then, \({\mathsf {A}} [{P}] \) is honest when she is honest in *all* \(S\).
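Once the obligations and the Weak Ready Do sets are computed, the readiness condition just described amounts to a simple check. An illustrative Python sketch (the dict-based inputs, mapping each session to a set of atoms, are our own simplification):

```python
def ready(obligations, weak_ready_do):
    """A is ready iff, at each session s where A has obligations,
    some obligation is also in A's Weak Ready Do set at s."""
    return all(not obs or obs & weak_ready_do.get(s, set())
               for s, obs in obligations.items())

# A has obligations {a, b} at session s and can weakly perform b: ready.
ok = ready({"s": {"a", "b"}}, {"s": {"b"}})
# A has an obligation at s but can only perform unrelated actions: not ready.
ko = ready({"s": {"a"}}, {"s": {"c"}})
```

The hard part, of course, is computing the Weak Ready Do set, which requires exploring the transitions of \(\mathsf {A}\); this is where the Maude search facilities come into play.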

### **Definition 7**

**(Honesty).** We say that:

- 1. \(S\) is \(\mathsf {A}\)*-free* iff it has no latent/stipulated contracts of \(\mathsf {A}\), nor processes of \(\mathsf {A}\);
- 2. \(\mathsf {A}\) is *ready in* \(S\) iff \( \; S \equiv (\varvec{u}) S' \;\wedge \; {{\mathrm{O }}^{\mathsf {A}}_{s}}({S'}) \ne \emptyset \;\implies {{\mathrm{WRD }}^{\mathsf {A}}_{s}}({S'}) \cap {{\mathrm{O }}^{\mathsf {A}}_{s}}({S'}) \ne \emptyset \);
- 3. \(P\) is *honest in* \(S\) iff \(\forall {\mathsf {A}} : \left( S \text { is}~{\mathsf {A}}\text {-free} \,\wedge \, {\mathsf {A}} [{P}] \mid S \xrightarrow {}^*S' \right) \implies {\mathsf {A}} \text { is ready in } S'\);
- 4. \(P\) is *honest* iff, for all \(S\), \(P\) is honest in \(S\).

We have implemented items 2 and 3 of the above definition in Maude (item 4 is dealt with in the next section). CO_{2} can simulate Turing machines [5], hence reachability in CO_{2} is undecidable, and consequently \(\mathrm{WRD }\), readiness and honesty are undecidable as well. To recover decidability, we then restrict to finite state processes: roughly, these are the processes with neither delimitations nor parallel compositions under process definitions.

Readiness is verified in Maude through the function ready?, which exploits a variant of the CO_{2} semantics (see Definition 10) that blocks all \(\mathtt{{do}}\) at \(\mathtt{{s}}\), implemented by a dedicated operator. Then, we look for reachable systems S1 where A can fire a do at s. If the search succeeds, ready? returns true. Note that if A has no obligations at s in S, ready? returns false — in disagreement with Definition 7. To correctly check readiness, we define the function ready (see [2]), which invokes ready? only when \({{\mathrm{O }}^{\mathtt{{A}}}_{\mathtt{{s}}}}({\mathtt{{S}}}) \ne \emptyset \). Honesty of \(P\) in a given system is then checked by searching, through the Maude model checker, the states reachable under the CO_{2} semantics.

### *Example 4*

Even though we conjecture that P is honest (in all contexts), we anticipate here that the verification technique proposed in Sect. 5 does not classify P as honest. This is because the analysis is (correct but) not complete in the presence of ask: indeed, the precise behaviour of an ask is lost by the analysis, because it abstracts from the contracts of the context.

## 5 Model Checking Honesty

We now address the problem of automatically verifying honesty. As mentioned in Sect. 1, this is a desirable goal, because it alerts system designers before they deploy services which could violate contracts at run-time (possibly incurring sanctions). Since honesty is undecidable in general [5], our goal is a verification technique which safely over-approximates honesty, i.e. it never classifies a process as honest when it is not. The first issue is that Definition 7 requires readiness to be preserved in all possible contexts, and there is an *infinite* number of such contexts. To overcome this problem, we present below an *abstract* semantics of CO_{2} which preserves the honesty property, while neglecting the actual context where the process \({\mathsf {A}} [{P}] \) is executed.

The definition of the abstract semantics of CO_{2} is obtained in two steps. First, we provide the projections from concrete contracts/systems to the abstract ones. Then, we define the semantics of abstract contracts and systems, and we relate the abstract semantics with the concrete one. The abstraction is always parameterised in the participant \(\mathsf {A}\) the honesty of which is under consideration.

The abstraction \(\alpha _{\mathsf {A}} (\gamma )\) of a bilateral contract \(\gamma ={\mathsf {A}} \; says \;{c} \mid {\mathsf {B}} \; says \;{d}\) (Definition 8 below) is either \(c\), or \( ctx .c\) when \(d\) has a \( ready \; {}\!\).
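Under the illustrative tagged-tuple encoding used in Sect. 2, this abstraction is a one-liner (the `("ctx", c)` tag is our own rendering of \( ctx .c\); not the Maude definition):

```python
def alpha(c, d):
    """Abstract A says c | B says d to A's side; if the partner's side
    starts with a ready, a pending context action ctx is recorded."""
    return ("ctx", c) if d[0] == "ready" else c

ZERO = ("int", ())
g1 = alpha(("int", (("a", ZERO),)), ("ext", (("~a", ZERO),)))  # just c
g2 = alpha(ZERO, ("ready", "~a", ZERO))                        # ctx . 0
```

The \( ctx \) marker records that the next contractual move belongs to the (abstracted) context, which is exactly the information needed to replay \(\mathsf {A}\)'s transitions without the context.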

### **Definition 8**

We now define the abstraction \(\alpha _{\mathsf {A}}\) of concrete systems, which just discards all the components not involving \(\mathsf {A}\), and projects the contracts involving \(\mathsf {A}\).

### **Definition 9**

*abstract system*\(\alpha _{\mathsf {A}} (S)\) as:

*Abstract semantics.* For all participants \(\mathsf {A}\), the abstract LTSs \(\mathrel {\mathrel {\xrightarrow {\ell }\rightarrow }_{\mathsf {A}}}\) and \(\mathrel {{\xrightarrow {\mu }_{\mathsf {A}}^{}}}\) on abstract contracts and systems, respectively, are defined by the rules in Fig. 4. Labels \(\ell \) are atoms, with or without the special prefix \( ctx \) — which indicates a contractual action performed by the context. Labels \(\mu \) are either \( ctx \) or they have the form \(\mathsf {A\!}:\ {\!\pi }\), where \(\mathsf {A}\) is the participant in \(\mathrel {{\xrightarrow {}_{\mathsf {A}}^{}}}\), and \(\pi \) is a CO_{2} prefix.

To check if \({\mathsf {A}} [{P}] \) is honest, we must only consider those \(\mathsf {A}\)-free contexts not already containing advertised/stipulated contracts of \(\mathsf {A}\). Such systems will always evolve to a system which can be split in two parts: an \(\mathsf {A}\) *-solo* system \(S_{\mathsf {A}}\) containing the process of \(\mathsf {A}\), the contracts advertised by \(\mathsf {A}\) and all the sessions containing contracts of \(\mathsf {A}\), and an \(\mathsf {A}\) *-free* system \(S_{ctx}\).

### **Definition 10**

A system \(S\) is \(\mathsf {A}\)*-safe* iff \(S \equiv (\varvec{s}) (S_{\mathsf {A}} \mid S_{ ctx })\), with \(S_{\mathsf {A}}\ {\mathsf {A}}\)-solo and \(S_{ ctx }\ \mathsf {A}\)-free.

The following theorems establish the relations between the concrete and the abstract semantics of CO_{2}. Theorem 3 states that the abstraction is *correct*, i.e. for each concrete computation there exists a corresponding abstract computation. Theorem 4 states that the abstraction is also *complete*, provided that a process has neither \(\mathsf {ask}_{{}}\,{}\!\) nor non-proper contracts.

### **Theorem 3**

### **Theorem 4**

The abstract counterparts of Ready Do, Weak Ready Do, and readiness are defined as expected, by using the abstract semantics instead of the concrete one (see [2] for details). The notion of honesty for abstract systems, namely \(\alpha \) *-honesty*, follows the lines of that of honesty in Definition 7.

### **Definition 11**

**(** \(\alpha \) **-honesty).** We say that \(P\) is \(\alpha \) *-honest* iff for all \(\tilde{S}\) such that \({\mathsf {A}} [{P}] \mathrel {{\xrightarrow {}_{\mathsf {A}}^{}}}^{\!\! *} \tilde{S}\), \(\mathsf {A}\) is ready in \(\tilde{S}\).

The main result of this paper follows. It states that \(\alpha \)-honesty is a sound approximation of honesty, and — under certain conditions — it is also complete.

### **Theorem 5**

If \(P\) is \(\alpha \)-honest, then \(P\) is honest. Conversely, if \(P\) is honest, \(\mathsf {ask}_{{}}\,{}\!\)-free, and has proper contracts only, then \(P\) is \(\alpha \)-honest.

Honesty is checked by searching for states in which \(\mathsf {A}\) is *not* ready. If the search fails, then \(\mathsf {A}\) is honest. As in Sect. 4, this check is decidable for finite state processes, i.e. those without delimitation/parallel under process definitions. The following example shows a process which was erroneously classified as honest in [5]. The Maude model checker has determined the dishonesty of that process, and by exploiting the Maude tracing facilities we managed to fix it.
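The search just described can be sketched generically as a reachability check over the abstract state space. In the Python sketch below, `step` and `is_ready` are parameters (our own abstraction — in the actual tool they come from the abstract semantics and the readiness check of this section):

```python
def alpha_honest(init, step, is_ready):
    """Search the states reachable from init for one where A is not ready;
    if none exists, P is α-honest, hence honest by Theorem 5."""
    seen, todo = set(), [init]
    while todo:
        s = todo.pop()
        if s in seen:
            continue
        seen.add(s)
        if not is_ready(s):
            return False, s          # counterexample state, as in Example 5
        todo += step(s)
    return True, None

# Toy state space 0 → 1 → 2, where state 2 violates readiness:
verdict, witness = alpha_honest(0, lambda s: [s + 1] if s < 2 else [],
                                lambda s: s != 2)
```

The returned witness plays the same role as the counterexample state printed by the Maude search in Example 5 below.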

### *Example 5*

We specify the CO_{2} process for A as follows. Variables x and y in P correspond to two separate sessions, where A respectively interacts with B and V. The advertisement of CV causally depends on the stipulation of the contract CB, because A must fire clickVoucher before tell y CV. In process Q the store waits for the answer of V: if V validates the voucher (first branch), then A accepts it from B; otherwise (second branch), A requires B to pay. The third branch R allows A to fire a \(\tau \) action, and then reject the voucher. The intuition is that \(\tau \) models a timeout, to deal with the fact that CV might not be stipulated. When we check the honesty of P with Maude, we find that P is dishonest: the output provides a state where A is not ready. There, A must do ok in session \(y\) ($1), while A is only ready to do a -reject at session \(x\) ($0). This problem occurs when the branch R is chosen. To recover honesty, it suffices to replace R with the following process R’:

## 6 Conclusions

We have described an executable specification in Maude of a calculus for contract-oriented systems. This has been done in two steps. First, we have specified a model for contracts, and we have formalised in Maude their semantics, and the crucial notions of compliance and culpability (Sect. 2). This specification has been exploited in Sect. 3 to implement in Maude the calculus CO_{2} [4]. Then, we have considered the problem of honesty [5], i.e. that of deciding when a participant always respects the contracts she advertises, in all possible contexts (Sect. 4). Writing honest processes is not a trivial task, especially when multiple sessions are needed for realising a contract (see e.g. Example 4 and Example 5). We have then devised a sound verification technique for deciding when a participant is honest, and we have provided an implementation of this technique in Maude (Sect. 5).

*Related work.* Rewriting logic [12] has been successfully used for more than two decades as a semantic framework wherein many different programming models and logics are naturally formalised, executed and analysed. Just by restricting to models for concurrency, there exist Maude specifications and tools for CCS [17], the \(\pi \)-calculus [16], Petri nets [15], Erlang [14], Klaim [18], adaptive systems [7], *etc.* A more comprehensive list of calculi, programming languages, tools and applications implemented in Maude is collected in [13].

The contract model presented in Sect. 2 is a refined version of the one in [5], which in turn is an alternative formalisation of the one in [8]. Our version is simpler and closer to the notion of *session behaviour* [1], and enjoys several desirable properties. Theorem 1 establishes that only one participant may be culpable in a bilateral contract, whereas in [5] both participants may be culpable, e.g. in \({\mathsf {A}} \; says \;{\mathsf{a } \, ; \, {c}} \mid {\mathsf {B}} \; says \;{{\bar{\mathsf{a }}} \, ; \, {d}}\). In our model, if both participants have an internal (or external) choice, then their contracts are *not* compliant, whereas e.g. \(\mathsf a .c\) and \(\bar{\mathsf{a }}.d\) (both external choices) are compliant in [5, 8] whenever \(c\) and \(d\) are compliant. The exculpation property established by Theorem 2 is stronger than the corresponding one in [5]. There, a participant \(\mathsf {A}\) is guaranteed to exculpate herself by performing (at most) two consecutive actions *of* \(\mathsf {A}\), while in our model any two actions (of *whatever* participant) suffice.

As far as we know, the concept of *contract-oriented computing* (in the meaning used in this paper) has been introduced in [6]. CO_{2}, a contract-agnostic calculus for contract-oriented computing, has been instantiated with several contract models — both bilateral [3, 5] and multiparty [4, 11]. Here we have instantiated it with the contracts in Sect. 2. A minor difference w.r.t. [3, 5, 11] is that here we no longer have \(\mathsf {fuse}{}{}\) as a language primitive, but rather the creation of fresh sessions is performed non-deterministically by the context (rule [Fuse]). This is equivalent to assuming a contract broker which collects all contracts, and may establish sessions when compliant contracts are found. In [5], a participant \(\mathsf {A}\) is considered honest when, in each possible context, she can always exculpate herself by a sequence of \(\mathsf {A}\)-solo moves. Here we require that \(\mathsf {A}\) is ready (i.e. some of her obligations are in the Weak Ready Do set) in all possible contexts, as in [3]. We conjecture that these two notions are equivalent. In [3] a type system has been proposed to safely over-approximate honesty. The type of a process \(P\) is a function which maps each variable to a *channel type*. These are behavioural types (in the form of Basic Parallel Processes) which essentially preserve the structure of \(P\), by abstracting the actual prefixes as “non-blocking” and “possibly blocking”. The type system relies upon checking honesty for channel types, but no actual algorithm is given for such verification, hence type inference remains an open issue. In contrast, here we have directly implemented in Maude a verification algorithm for honesty, by model checking the abstract semantics in Sect. 5.

## Acknowledgments

This work has been partially supported by the Autonomous Region of Sardinia under grants L.R.7/2007 CRP-17285 (TRICS) and P.I.A. 2010 project “Social Glue”, by MIUR PRIN 2010-11 project “Security Horizons”, and by EU COST Action IC1201 “Behavioural Types for Reliable Large-Scale Software Systems” (BETTY).

## References

- 1. Barbanera, F., de’Liguoro, U.: Two notions of sub-behaviour for session-based client/server systems. In: PPDP (2010)
- 2. Bartoletti, M., Murgia, M., Scalas, A., Zunino, R.: Modelling and verifying contract-oriented systems in Maude. http://tcs.unica.it/software/co2-maude
- 3. Bartoletti, M., Scalas, A., Tuosto, E., Zunino, R.: Honesty by typing. In: Beyer, D., Boreale, M. (eds.) FORTE 2013 and FMOODS 2013. LNCS, vol. 7892, pp. 305–320. Springer, Heidelberg (2013)
- 4. Bartoletti, M., Tuosto, E., Zunino, R.: Contract-oriented computing in \({\rm CO}_2\). Sci. Ann. Comp. Sci. **22**(1), 5–60 (2012)
- 5. Bartoletti, M., Tuosto, E., Zunino, R.: On the realizability of contracts in dishonest systems. In: Sirjani, M. (ed.) COORDINATION 2012. LNCS, vol. 7274, pp. 245–260. Springer, Heidelberg (2012)
- 6. Bartoletti, M., Zunino, R.: A calculus of contracting processes. In: LICS (2010)
- 7. Bruni, R., Corradini, A., Gadducci, F., Lluch Lafuente, A., Vandin, A.: Modelling and analyzing adaptive self-assembly strategies with Maude. In: Durán, F. (ed.) WRLA 2012. LNCS, vol. 7571, pp. 118–138. Springer, Heidelberg (2012)
- 8. Castagna, G., Gesbert, N., Padovani, L.: A theory of contracts for web services. ACM Trans. Program. Lang. Syst. **31**(5), 1–61 (2009)
- 9. Clavel, M., Durán, F., Eker, S., Lincoln, P., Martí-Oliet, N., Meseguer, J., Quesada, J.F.: Maude: specification and programming in rewriting logic. In: TCS (2001)
- 10. Honda, K., Vasconcelos, V.T., Kubo, M.: Language primitives and type discipline for structured communication-based programming. In: Hankin, C. (ed.) ESOP 1998. LNCS, vol. 1381, pp. 122–138. Springer, Heidelberg (1998)
- 11. Lange, J., Scalas, A.: Choreography synthesis as contract agreement. In: ICE (2013)
- 12. Meseguer, J.: Rewriting as a unified model of concurrency. In: Baeten, J.C.M., Klop, J.W. (eds.) CONCUR 1990. LNCS, vol. 458, pp. 384–400. Springer, Heidelberg (1990)
- 13. Meseguer, J.: Twenty years of rewriting logic. JLAP **81**(7–8), 721–781 (2012)
- 14. Neuhäußer, M., Noll, T.: Abstraction and model checking of core Erlang programs in Maude. ENTCS **176**(4), 143–163 (2007)
- 15. Stehr, M.-O., Meseguer, J., Ölveczky, P.C.: Rewriting logic as a unifying framework for Petri nets. In: Ehrig, H., Juhás, G., Padberg, J., Rozenberg, G. (eds.) APN 2001. LNCS, vol. 2128, pp. 250–303. Springer, Heidelberg (2001)
- 16. Thati, P., Sen, K., Martí-Oliet, N.: An executable specification of asynchronous pi-calculus semantics and may testing in Maude 2.0. In: ENTCS 71 (2002)
- 17. Verdejo, A., Martí-Oliet, N.: Implementing CCS in Maude 2. In: ENTCS 71 (2002)
- 18. Wirsing, M., Eckhardt, J., Mühlbauer, T., Meseguer, J.: Design and analysis of cloud-based architectures with KLAIM and Maude. In: Durán, F. (ed.) WRLA 2012. LNCS, vol. 7571, pp. 54–82. Springer, Heidelberg (2012)
- 19. Şerbănuţă, T.F., Roşu, G., Meseguer, J.: A rewriting logic approach to operational semantics. Inf. Comput. **207**(2), 305–340 (2009)