Pabble: parameterised Scribble

Many parallel and distributed message-passing programs are written in a parametric way over available resources, in particular the number of nodes and their topologies, so that a single parallel program can scale over different environments. This article presents a parameterised protocol description language, Pabble, which can guarantee safety and progress in a large class of practical, complex parameterised message-passing programs through static checking. Pabble can describe an overall interaction topology, using a concise and expressive notation, designed for a variable number of participants arranged in multiple dimensions. These parameterised protocols in turn automatically generate local protocols for type checking parameterised MPI programs for communication safety and deadlock freedom. In spite of undecidability of endpoint projection and type checking in the underlying parameterised session type theory, our method guarantees the termination of end point projection and type checking.


Introduction
Message-passing is becoming a dominant programming model, as witnessed in application programs from high-performance computing scaling over thousands of cores or cloud-based scalable backends of popular web services. These are environments where services are dynamically provided through choreography of interactions among numerous distributed components. Assuring safety of concurrent software in these environments is a vital concern: many message-passing libraries, programs and systems are shared and long-lived, and some process sensitive data, so that safety violations such as deadlocks and incompatible messaging patterns or data payloads between senders and receivers can have catastrophic and unexpected consequences [13].
Our proposal for safety assurance of message-passing programs is based on multiparty session types [16]. The methodology considers the specification of a global interaction protocol among multiple participants, from which we can derive a local protocol for each individual participant. Once each program is type-checked against its local protocol, a set of typed programs is guaranteed to run without deadlock or communication mismatches. We base our work on [23], where the authors proposed a programming framework for message-passing parallel algorithms, centred on explicit, formal description of global protocols, and examined its effectiveness through an implementation of a toolchain for the C language. The toolchain uses the language Scribble [15,27] for describing multiparty session types in a Java-like syntax. Consider a simple Scribble protocol representing a ring topology between four workers. A Scribble protocol starts with the keyword global protocol, followed by the protocol name, Ring. The role declarations are then passed as parameters of the protocol, here Worker1 through to Worker4. The Ring protocol describes a series of communications in which Worker1 passes a message of type Data(int) to Worker4 by forwarding through Worker2 and Worker3 in that order, and then receives a message back from Worker4. It is easy to notice that explicitly describing all interactions among distinct roles is verbose and inflexible: for example, when extending the protocol with an additional role Worker5, we must rewrite the whole protocol. On the other hand, we observe that these worker roles have identical communication patterns which can be logically grouped together: Worker i+1 receives a message from Worker i, and the last Worker sends a message to Worker 1. In order to capture these replicable patterns, we introduce an extension of Scribble with dependent types called Parameterised Scribble (Pabble). In Pabble, multiple participants can be grouped in the same
role and indexed. This greatly enhances the expressive power and modularity of the protocols. Here 'parameterised' refers to the number of participants in a role, which can be changed by parameters.
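For reference, the four-worker Ring protocol described above might read as follows in Scribble (a sketch reconstructed from the description; the original listing is elided in this version):

```
global protocol Ring(role Worker1, role Worker2,
                     role Worker3, role Worker4) {
  Data(int) from Worker1 to Worker2;
  Data(int) from Worker2 to Worker3;
  Data(int) from Worker3 to Worker4;
  Data(int) from Worker4 to Worker1;
}
```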
In Pabble, the declaration role Worker[1..N] declares workers from 1 to an arbitrary integer N. The Worker roles can be identified individually by their indices: for example, Worker[1] refers to the first and Worker[N] to the last. In the body of the protocol, the sender, Worker[i:1..N-1], declares multiple Workers, bound by the bound variable i, which iterates from 1 to N-1. The receivers, Worker[i+1], are calculated from their indices for each instance of the bound variable i. The second line is a message sent back from Worker[N] to Worker[1]. Projecting Ring with respect to the parameterised Worker role gives a parameterised local protocol, which represents multiple endpoints in the same logical grouping.
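Assembling the statements described above, the parameterised Ring protocol might be sketched as follows; the local form below is our own guess at the projected protocol (the article's concrete listings are elided in this version):

```
global protocol Ring(role Worker[1..N]) {
  Data(int) from Worker[i:1..N-1] to Worker[i+1];
  Data(int) from Worker[N] to Worker[1];
}

local protocol Ring at Worker[1..N](role Worker[1..N]) {
  if Worker[i:2..N] Data(int) from Worker[i-1];
  if Worker[i:1..N-1] Data(int) to Worker[i+1];
  if Worker[1] Data(int) from Worker[N];
  if Worker[N] Data(int) to Worker[1];
}
```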

Challenges
The main technical challenge for the design and implementation of parameterised session types is to develop a method to automatically project a parameterised global protocol to a parameterised local protocol ensuring termination and correctness of the algorithm.
Unfortunately, as with the indexed dependent type theory in the λ-calculus [5,33], the underlying parameterised session type theory [12] has shown that projection and type checking with general indices are undecidable. Hence there is a tension between termination and the expressiveness needed for concise specifications of complex parameterised protocols.
Our main approach to overcoming these challenges is to make the theory more practical by extending Scribble with index notation originating from a widely used textbook on modelling concurrency in Java [19]. For example, the notations Worker[i:1..N-1] and Worker[i+1] in the Ring protocol are from [19]. Interestingly, this compact notation is not only expressive enough to represent representative topologies ranging from parallel algorithms to distributed web services, but also offers a solution to cope with the undecidability of parameterised multiparty session types.

Overview
Fig. 1 shows the relationships between the three layers: global protocols, local protocols and implementations.
(1) A programmer first designs a global protocol using Pabble. (2) Our Pabble tool then automatically projects the global protocol into its local protocols. (3) The programmer either implements the parallel application using the local protocol as a specification, or type-checks existing parallel applications against the local protocol. If the communication interaction patterns in the implementation follow the local protocols generated from the global protocol, this method automatically ensures deadlock-free and type-safe communication in the implementation. In this work we focus on the design and implementation of the language for describing parallel message-passing interactions as global and local protocols, i.e. (1) and (2), and outline how a Pabble local type checker for MPI (3) can be implemented.
This article presents a full version of the work published in [22], which had a particular focus on modelling and expressing communication topologies in parallel applications. Apart from including detailed proofs of the well-formedness conditions and a number of additional examples, we include use cases from web services and large-scale distributed cyber-infrastructures to show the flexibility of the Pabble language for compact parametric protocols outside the field of high-performance parallel applications. We also expand the related work into a more thorough survey and discussion of formal verification of MPI-based parallel applications.
The contributions of this article are:
- The first design and implementation of parameterised session types in a global protocol language (Pabble) (§ 2.2). The protocols can represent complex topologies with an arbitrary number of participants. Additional use cases of Pabble, such as the common interaction patterns for high-performance computing described in Dwarfs [4], can be found on the project web page [24].
- An outline of a methodology for type checking source code written with MPI against Pabble protocols (§ 4).
Pabble: Parameterised Scribble

Scribble [27] is a developer-friendly notation for specifying application-level protocols based on the theory of multiparty session types [6,16]. This section introduces an evolution of Scribble with parameterised multiparty session types (Pabble), defines its endpoint projection and proves its correctness.

The Pabble protocol language
The core elements of a Pabble protocol are interaction statements, choices and iterations. These features are also common to the Scribble language, which Pabble extends. Hence, Scribble protocols are compatible with Pabble, but the most expressive features, such as role parameterisation, can only be found in Pabble.
Interaction statements describe the messages passed between distributed participants of the protocol. For example, in the Ring protocol below, Data(int) from Worker[1] to Worker[2]; is an interaction statement which sends a message from a participant (called a role) Worker[1] to another participant Worker[2]. The participants are declared as arguments of the protocol, role Worker[1..4]. The subscripting notation on the roles indexes the participants, and is explained in detail in the next section. The message has a label, Data, which may be omitted from the interaction statement. The message also contains a type name as a parameter of the label, e.g. int, called a payload type. The payload type represents the data type of the message being sent.

```
Data(int) from Worker[1] to Worker[2];
Data(int) from Worker[2] to Worker[3];
Data(int) from Worker[3] to Worker[4];
Data(int) from Worker[4] to Worker[1];
```

Choices are written as choice at Role blocks, where each of the branches is an alternative interaction sub-pattern which the participants can collectively select. The deciding role sends a label (e.g. Choice0) to the other roles involved in the choice to notify them of the branch taken.
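A sketch of such a choice, reusing the Choice0 label mentioned above (branch labels and payloads are illustrative, not taken from the article's listings):

```
choice at Worker[1] {
  Choice0(int) from Worker[1] to Worker[2];
} or {
  Choice1(int) from Worker[1] to Worker[2];
}
```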
Iterations (loops) in the interaction patterns are written as recursion blocks (rec Label { }), with a continue Label; statement to jump back to the beginning of the recursion.
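For instance, a sketch of an iterated exchange (the label name and body are our own):

```
rec LOOP {
  Data(int) from Worker[1] to Worker[2];
  continue LOOP;
}
```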

Global protocols
Fig. 2 lists the core syntax of Pabble, which consists of two protocol declarations, global and local. A global protocol is declared with the protocol name (str denotes a string) and role and group parameters, followed by the body G. A role R is a name with argument expressions. The argument expressions are ranges or arithmetic expressions h, and the number of arguments corresponds to the dimension of the array of roles: for example, Worker[1..4] declares a one-dimensional array of four roles, while Worker[1..4][1..4] declares a two-dimensional array.
Declared roles can be grouped by specifying a named group using the keyword group, followed by the group name and the set of roles. For example, group EvenWorker = {Worker[2][2], Worker[4][2]} creates a group which consists of two Workers. A special built-in group, All, is defined as all processes in a session. We can encode collective operators such as many-to-many and many-to-one communication with All, as will be explained later.
Apart from specifying ranges explicitly, ranges can also be specified using expressions. An expression e consists of the usual operators for numbers, logarithm, left and right logical shifts (<<, >>), numbers, variables (i, j, k), and constants (M, N). Constants are either bound outside the protocol declaration or left free (unbound) to represent an arbitrary number. As in [19], when constants are bound, they are declared by numbers outside the protocol, e.g. const N = 100, or by lower and upper bounds, e.g. const N = 1..1000. We also allow leaving the declaration free (unbound), e.g. const N, as a shorthand for an arbitrary constant with lower and upper bounds 0 and max respectively, i.e. const N = 0..max, where max is a special value representing the maximum possible value, or practically unbounded. A binding range expression b takes the form i : e1..e2, which means i ranges from e1 to e2. Binding variables always bind to a range expression and not to individual values. We shall explain the use of binding range expressions in more detail later.
In a global protocol G, l(T) from R1 to R2 is called an interaction statement, representing the passing of a message with label l and type T from one role R1 to another role R2. R1 is the sender role and R2 the receiver role. choice at R {G1} or ... or {Gn} means the role R will select one of the global types G1,...,Gn. rec l {G} is recursion with the label l, which declares a label for the continue l statement. foreach(b){G} denotes a for-loop whose iteration is specified by b. For example, foreach(i:1..n){G} represents the iteration from 1 to n of G, where G is parameterised by i.
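As an illustration, a pipeline written with foreach might be sketched as follows (role and label names are our own):

```
global protocol Pipeline(role Worker[1..N]) {
  foreach (i:1..N-1) {
    Data(int) from Worker[i] to Worker[i+1];
  }
}
```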
Finally, allreduce op(T) means all processes perform a distributed reduction of a value of type T with the operator op (like MPI_Allreduce in MPI). It takes a mandatory predefined operator op, which must be a commutative and associative arithmetic operation. Pabble currently supports sum and product.
We allow simple expressions (e.g. Worker[i:0..2*N-1]) to parameterise ranges. In addition, indices can also be calculated by expressions on bound variables (e.g. Worker[i+1]) to refer to relative positions of roles.
These restrictions on indices, such as bound variables and relative index calculations, ensure termination of the projection algorithm and type checking. The binding conditions are discussed in the next subsection.

Local protocols
A local protocol L consists of the same syntax as the global type except for the input from R (receive) and the output to R (send). The main declaration local protocol str at Re (...){L} means the protocol is located at role Re. We call Re the endpoint role. In Pabble, multiple local protocol instances can reside in the same parameterised local protocol. This is because each local protocol is a local specification for a participant of the interaction. Where multiple participants with a similar interaction structure fulfil the same role in the protocol, such as the Workers from our Ring example in the introduction, the participants are grouped together as a single parameterised role. The local protocol for a collection of participants can then be specified in a single parameterised local protocol, using conditional statements on the role indices to capture edge cases. For example, in the general case of a pipeline interaction, every participant receives from one neighbour and sends to the other, except the first participant, which initiates the pipeline and is only a sender, and the last participant, which ends the pipeline and does not send. In these cases we use conditional statements to guard the input or output statements. To express conditional statements in local protocols, if R may be prepended to an input or output statement; the guarded statement is ignored if the local role does not match R. More complicated matches can be performed with a parameterised role, where the role parameter range of the condition is matched against the parameter of the local role. For example, if Worker[1..3] will match Worker[2] but not Worker[4]. It is also possible to bind a variable to the range in the condition, e.g. if Worker[i:1..3], and i can then be used in the same statement.
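Under these conventions, the local protocol for a pipeline might be sketched as follows (our own reconstruction, not taken from the article's listings): the first Worker only sends, the last only receives, and every other Worker does both.

```
local protocol Pipeline at Worker[1..N](role Worker[1..N]) {
  if Worker[i:2..N] Data(int) from Worker[i-1];
  if Worker[i:1..N-1] Data(int) to Worker[i+1];
}
```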

Well-formedness conditions: index binding
As Pabble protocols include expressions in parameters, a valid Pabble protocol is subject to a few well-formedness conditions. Below we show the conditions which ensure that the indices used in roles are correctly bound. We use fv/bv to denote the sets of free/bound variables, defined as fv(i) = {i}, fv(N) = fv(num) = ∅, fv(i : e1..e2) = fv(e1) ∪ fv(e2), and bv(i : e1..e2) = {i}. The main binding condition (Condition 2 below) considers statements nested in for-loops foreach(b1){... foreach(bk){G}} with k ≥ 0:
(a) Suppose an interaction statement l(T) from Role[h1]...[hn] to Role'[e1]...[em] appears in G, where each role is either a single participant or a group. Then:
(1) n = m (i.e. the dimensions of the parameters are the same);
(2) fv(hj) ⊆ ∪bv(bi) (i.e. the free variables in the sender roles are bound by the for-loops);
(3) fv(ej) ⊆ (∪bv(bi)) ∪ bv(hj) (i.e. the free variables in the receiver roles are bound by either the for-loops or the sender roles).
(b) Suppose choice at R appears in G. Then R is a single participant, i.e. either Role or Role[e] with fv(e) ⊆ (∪bv(bi)).
Condition 2(a)(1) ensures the number of sender parameters matches the number of receiver parameters; for example, a statement whose sender is indexed in two dimensions but whose receiver is indexed in one is invalid. Condition 2(a)(2) ensures variables used by a sender are declared by the enclosing for-loops.
Condition 2(a)(3) makes sure the receiver parameter at the j-th position is bound by the for-loops or by the sender parameter at the same j-th position (and not by binders at other positions); a statement whose receiver indices swap the sender's binding positions is therefore invalid. Condition 2(b) is similar for the case of choice statements, where R should be a single participant to satisfy the unique sender condition in [9,11].
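Hypothetical illustrations of these conditions (the article's original examples are elided; the role name W and payload T are our own):

```
T from W[i:1..N][j:1..M] to W[i+1][j];   // valid: each receiver index is
                                         // bound at the same position
T from W[i:1..N][j:1..M] to W[j][i];     // invalid by 2(a)(3): indices
                                         // swapped across positions
T from W[i:1..N] to W[i][i];             // invalid by 2(a)(1): dimensions differ
```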
Lower and upper bound constraints are designed for runtime constants, e.g. the number of processes spawned in a scalable protocol, which is unknown at design time and is defined and immutable once the execution begins. To ensure Pabble protocols are communication-safe for all possible values of the constants, we must ensure that all parameterised role indices stay within their declared range. Such conditions prevent sending to or receiving from an invalid (non-existent) role, which would lead to a communication mismatch at runtime. In case (1), the check is trivial. In case (2), we require a general algorithm to check the validity of multiple constraints appearing in the ranges. First, we formulate the constraints on the values of the constants as a series of linear inequalities. We then combine the linear inequalities and determine the feasible region using standard linear programming. The feasible region represents the pool of possible values under any combination of the constraints. Since the objective is to ensure that the role parameters in the protocol body (i.e. 1..M and 2..M+1) stay within the bounds of 1..N, we define a constraint set consisting of the lower and upper bound inequalities of the two ranges. By comparing the resulting inequality against the basic constraints on the constants, we can check that not all outcomes belong to the feasible region, and thus this is not a communication-safe protocol (an example of an unsafe case is M = 3 and N = 2). On the other hand, if we alter Line 4 to T from R[i:1..N-1] to R[i+1];, the constraints are unconditionally true, so we can guarantee that no combination of the constants M and N will cause communication errors.
Arbitrary constants In addition to constant values and lower- and upper-bounded constants, we also consider use cases where the value of a constant can be any arbitrary value in the set of natural numbers. This is an extension of case (2) with the max keyword, where we write const N = 0..max to represent a range without an upper bound.
In order to check that role indices are valid with unbounded ranges, we enforce two simple restrictions. First, only one constant can be defined with max in one global protocol. Secondly, when the index is unbounded, its range calculation may only use addition or subtraction of integers (e.g. i+1).
An invalid use of arbitrary constants arises, for example, when an unbounded index is computed with an operation other than addition or subtraction; a protocol whose indices always stay between 0 and N, by contrast, is valid.
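A sketch of a valid protocol over an unbounded constant (our own example, consistent with the restrictions above):

```
const N = 0..max;

global protocol Pipe(role R[0..N]) {
  T from R[i:0..N-1] to R[i+1];   // i+1 always stays within 0..N
}
```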
As we have shown in [24], most representative topologies with an arbitrary number of participants can be represented under these conditions.

Endpoint projection
In the next step, a Pabble protocol is projected to a local protocol, which is a simplified Pabble protocol as viewed from the perspective of a given endpoint. The projection algorithm is explained below. To begin with, the header of the global protocol is projected: the protocol name name and the parameters param are preserved, and the endpoint role Re is declared.
Table 1 shows the projection of the body of a global protocol G onto R at endpoint role Re. The projection rules are applied from top to bottom in the table; if a global protocol matches multiple rules, there will be more than one line of projected protocol for a single global statement. In Rules 1-4, we show the rule for a single argument, as the same rule applies to n arguments. Each rule is applied if R meets the condition in the second column under the constraints given by the constant declarations. Rules 1 and 2 show the projection of the interaction statement when R appears in the receiver and the sender position respectively. Since R is a single participant, it should satisfy R = Re (i.e. the role is the endpoint role). The projection simply removes the reference to role R from the original interaction statement.
Rules 3 and 4 show the projection of an interaction statement when role R is a parameterised single participant and R is an element of the endpoint role Re. For example, if Re = Worker[1..3], R can be Worker[1], Worker[2] or Worker[3]. In addition to removing the reference to role R in the receive and send statements, we also prepend the conditions under which the role applies. The order in which the projection rules are applied ensures that an interaction statement is localised to a receive followed by a send. In general, both receive-send and send-receive orders in the projected local protocol are correct, as long as the projection algorithm is consistent and the well-formedness conditions of the global protocol are satisfied. The global protocol will ensure, by session typing, that a send has a matching receive at the same stage of the protocol.
Rule 5 is for All-to-All communication.Any role R will send a message with type U to all other participants and will receive some value with type U from all other participants.Since all participants start by first sending a message to all, no participant will block waiting to receive in the first phase, so no deadlock occurs.
Rules 6 and 7 are the projection rules for the case where we project onto a group. We need to check that the group is a subset of the endpoint role Re with respect to the group declarations in the global protocol. The rules can then be understood analogously to Rules 3 and 4.
Rules 8 and 9 show the projection of interaction statements with parameterised roles using relative indexing (we show only one argument; the algorithm extends easily to multiple arguments using the same methods). Rule 8 uses two auxiliary transformations of expressions, apply and inv, examples of which are listed in Table 2. apply takes two arguments, a range with a binding variable (b) and an expression using the binding variable (e); the expression is applied to both ends of the range to transform the relative expression into a well-defined range. inv calculates the inverse of a given expression: for example, the inverse of i+1 is i-1 and the inverse of i*2+1 is (i-1)/2. In cases where an inverse expression cannot be derived, such as for i%2, the expression is evaluated by expanding to all values in the range, instantiating every value bound by its binding variable (e.g. i).
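Concretely, applying these definitions (the first line is our own worked instance of apply; the inverses are those given above):

```
apply(i:1..3, i+1) = 2..4    // apply i+1 to both ends of the range 1..3
inv(i+1)   = i-1
inv(i*2+1) = (i-1)/2
```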
As a concrete example, to project a statement whose receiver index uses an expression with no inverse, the statement will be expanded to U from W[1] to W[0]; U from W[2] to W[1]; U from W[3] to W[0]; before applying the projection rules. In order to perform the range expansion above, the beginning and the end of the range must be known at projection time. For this reason, the projection algorithm returns failure if a statement uses parameterised roles with such expressions and the range of the expressions is defined with arbitrary constants (see § 2.4); otherwise, the expansion could grow infinitely and not terminate. This is the only situation in which projection may fail, given a well-formed global protocol. The condition R[b] ⊆ Re of Rule 9 means the range of b is within the range of the endpoint role Re. If a projection role matches the choice role (R in choice at R) (Rule 10), the result is a selection statement, whose action is selecting a branch by sending a label. The child or-blocks (L1...Ln) are recursively projected. If a projection role does not match the choice role (Rule 11), the choice statement becomes a branch statement, which is the dual of selection. Recursion (Rule 12), continue (Rule 13) and foreach (Rule 14) statements are kept unchanged in the projected endpoint protocol.

Collective operations
In addition to point-to-point message passing, collective operations can also be concisely represented in Pabble. Endpoint message-passing statements are interpreted differently depending on the declarations (i.e. parameters) in the global type.

Correctness and termination of the projection
The parameterised session theory on which Pabble is based [12] has shown that, in the general case, projection and type checking are undecidable. Our first challenge in Pabble's design is to ensure the termination of well-formedness checking and projection without sacrificing expressiveness. The theorems and proofs are given in this section.
Theorem 1 (termination) Given a global protocol G, the well-formedness checking terminates; and given a well-formed global type G and an endpoint role Re, the projection of G onto Re always terminates.
Proof By the definition of the well-formedness conditions in § 2.3, if a free variable appears in a range position, it is bound either by a for-loop or by the sender role in the interaction statement. In the case of the for-loop, we can apply the same reduction rules for the for-loop of the global types from § 2 and apply the equality rules in [12, Figure 15]. Hence one can check, given Re and R, that all of the conditions (in the second column) in Table 1 are decidable. For the projection, the only non-trivial projection rule is Rule 8. The termination of this rule is ensured by the termination of apply(b,e) and inv(e).
If inv(e) is not defined, we first check that e has a finite range and use Rules 3 and 4 by expanding the interaction statements to all values in the range (as explained in § 2.5). Hence the projection algorithm always terminates.
Note that the above theorem implies the termination of type checking (see Theorem 4.4 in [12]).
One of the benefits of using Pabble is that it provides the expressiveness required to represent collective interactions in MPI. The correctness of projections of these protocols is ensured by the projection rule for groups in [10]. The special case of U from All to All follows the asynchronous subtyping rules in [21]. The correctness property relating to the ranges of Pabble follows:

Theorem 2 (range) The indices of roles appearing in a local protocol body do not exceed the lower and upper bounds stated in the global protocol.

(Fig. 3: Point-to-point communication and Pabble representation.)

Proof If the range relies on case (2), the correctness is ensured by linear programming. The other cases are straightforward, since each condition in Table 1 checks whether roles conform to the bounds in the global protocol.

We now run through the projection of the Ring protocol in § 1 as an example. Local protocols are generated from the global protocols. From the perspective of a projection tool, to write a protocol for an endpoint, we start with local protocol followed by the name of the protocol and the endpoint role it is projected for. Since the only role of the Ring protocol is Worker, which is a parameterised role, we use the full definition of the parameterised role, Worker[1..N]. Then we list the roles used in the protocol inside a pair of parentheses, similar to function arguments in a function definition in C. Note that if the projection role is in the list, we exclude it, because the local protocol itself is written from the perspective of that role; however, since parameterised roles can be used by multiple endpoint roles, we allow parameterised roles to appear in the list of roles in the protocol. The first line of the projected protocol is thus local protocol Ring at Worker[1..N](role Worker[1..N]).

Use case: Quote-Request

The Quote-Request use case proceeds in the following steps:
1. The buyer broadcasts a quote request to all suppliers.
2. Each supplier forwards the quote request to its manufacturers.
3. The suppliers interact with their manufacturers to build the quotes for the buyer, which are then sent back to the buyer.
4. (a) Either the buyer agrees with the quotes and places the orders,
   (b) or the buyer modifies the quote and sends it back to the suppliers.
5. In the case that the supplier receives an updated quote request (4b),
   (a) either the supplier responds to the updated quote request by agreeing to it and sending a confirmation message back to the buyer,
   (b) or the supplier responds to the updated quote request by modifying it and sending it back to the buyer, and the buyer goes back to step 4,
   (c) or the supplier responds to the updated quote request by rejecting it,
   (d) or the supplier renegotiates with the manufacturers, in which case we return to step 3.

Fig. 8 shows the interactions between the different components in the Quote-Request use case. We set a generic number S for Suppliers and M for Manufacturers. The interactions are described as a Pabble global protocol in Listing 4. In the protocol, we omit the implicit requestIdType from the payload type in all of the messages, which keeps track of the state of each role over the stateless web transport.
The Buyer initiates the quote request on Line 2, when it broadcasts a Quote() message to all Suppliers. Then on Lines 4-7 each of the Supps forwards the quote requests to its respective Manufacturers, and gets a reply from each of them through a series of gather and scatter interactions. Next, the Suppliers reply to the Buyer on Line 9, and the Buyer then decides between accepting the offer straight away (Line 14, outcome 4a) or sending a modified quote request (Line 17, outcome 4b). If a Supp receives a modified quote, it decides between accepting the modified quote (Line 21, outcome 5a), rejecting the modified quote straight away (Line 29, outcome 5c), or modifying the quote and renegotiating with the Buyer (Line 24, outcome 5b). It is also possible that the Supplier renegotiates with its Manufacturers again, in which case it notifies the Buyer and returns to the initial negotiation phase (Line 32, outcome 5d). The projected endpoint protocol for the Buyer is given in Listing 5.

Use case: RPC Composition
We present a use case from the Ocean Observatories Initiative project [1]. The use case describes a high-level Remote Procedure Call (RPC) request/response protocol between layers of proxy services. An application sends a request to a high-level service, and the service is expected to reply to the application with a result. If the service does not provide the requested functionality, the high-level service issues a request to a lower-level service which can process the request. This request-response protocol is chained between services at each level until a low-level service is reached. Fig. 9 describes the chaining of the RPC-style request/response protocol. A request is routed to the most relevant service provider through multiple proxy services, hidden from the higher-level services. The request travels along a multi-hop path from the requester to the resources. The reply is routed in reverse through the same proxy services back to the requester.
We represent this series of interactions using the Pabble protocol outlined below. The participants, Service[1..N], represent a proxy service at each of the levels: Service[1] is the requester and Service[N] is the actual service provider. A Request() message is sent from a Service to the Service at the level directly below, until it reaches Service[N], which processes the request and replies to the higher-level service with a Response(). Using a foreach loop with decrementing indices, the Response() is cascaded back to the originating service, Service[1]. The Pabble protocol is shown in Listing 6 (RPC request/response chaining). As the request and response phases are symmetric and involve the same participants, we are able to compact the multi-layer protocol to only two foreach loops, each with one parameterised interaction statement. N can be an arbitrary constant, to allow maximum flexibility in the protocol.

Listing 9: MPI implementation for Solver protocol
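The chained protocol might be sketched as follows (our own reconstruction of the elided Listing 6; the decrementing-loop notation is an assumption based on the description above):

```
global protocol RPC(role Service[1..N]) {
  foreach (i:1..N-1) {
    Request() from Service[i] to Service[i+1];
  }
  foreach (i:N-1..1) {
    Response() from Service[i+1] to Service[i];
  }
}
```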

Type checking
Given the local protocol and the implementation, we propose a session type checker to verify the conformance of the implementation against the projected local protocol.Conformance of endpoint programs against the projected protocol will yield communication-safe parallel programs.
Pabble local protocols have a structure similar to that of MPI programs: both are written as a single source code representing multiple endpoints, a consequence of the Single Program Multiple Data (SPMD) parallel programming model.
The core communication primitives of MPI can correspond to Pabble statements, as demonstrated in Listing 9.In addition, collective operations such as broadcast (MPI_Bcast) or all-reduce (MPI_Allreduce) can be supported by the collective operation correspondence in § 2.5.
Challenges for a complete MPI type checker. In [23], Ng et al. introduced a session type checker for a non-parameterised protocol language and a simple session programming API. We face a number of challenges when building a complete type checker using the same methodology for Pabble, a dependent protocol language, and MPI, a standard parameterised implementation API. The Pabble language, with its well-formedness checks, reduces the undecidability issues in the role representation by using integer indices instead of general indices. The type checking process compares the protocol against a simplified, canonical local protocol extracted from the implementation, which still poses a challenge in the protocol extraction process. In particular, inferring source and destination processes from parametric source code is non-trivial. MPI uses process IDs (or ranks) to identify processes, and it is valid to perform numeric operations on the ranks to efficiently calculate target processes. This allows ways of exploiting C language features while remaining a valid program. For example, instead of using a conventional conditional statement, an MPI function call of this form may be used: MPI_Send(buf, cnt, MPI_INT, rank%2 ? rank+1 : rank-1, ...), where the process ID, rank, is used as a boolean, so a straightforward analysis of rank usages would not be sufficient. In order to correctly calculate the target processes of the interactions, it will be necessary to simulate rank calculations by techniques such as symbolic execution or combinations with runtime techniques.
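To make the difficulty concrete, the target expression from the MPI_Send call above can be factored out and evaluated per rank, which is effectively what a checker must establish statically. A plain-C sketch; the pairing check and the 1-based rank range are illustrative assumptions, not part of the paper's tool.

```c
#include <stdbool.h>

/* The target expression from the MPI_Send example: rank%2 is used as a
 * boolean, so odd ranks address rank+1 and even ranks address rank-1. */
static int target_rank(int rank) { return rank % 2 ? rank + 1 : rank - 1; }

/* Illustrative safety check: with 1-based ranks 1..size, every rank's
 * target must lie in range and must target it back (disjoint send pairs). */
static bool targets_pair_up(int size) {
    for (int r = 1; r <= size; r++) {
        int t = target_rank(r);
        if (t < 1 || t > size || target_rank(t) != r)
            return false;
    }
    return true;
}
```

For an even size the ranks form disjoint pairs (1 with 2, 3 with 4, and so on), while an odd size leaves the last rank pointing outside the range; a checker must recover exactly this kind of fact from arbitrary rank arithmetic, hence the need for symbolic evaluation.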

Conclusion
This article introduced a new global protocol description language, Pabble, and applied it to ensure deadlock-free and type-safe communications in parallel programs.
Local protocols are projected from a parameterised global protocol, and we outlined a methodology to specify and type check MPI parallel programs for safety. Our global and local protocols bring the expressiveness of Scribble to new levels, overcoming the issues of the underlying parameterised multiparty session type theory [12] by a careful design choice for indices based on [19]. Combined with the multirole theory from [10], Pabble can represent and type check representative MPI collective operators. We are not aware of any prior framework which is uniformly applicable to guaranteeing safety of message-passing parallel programs that run over complex topologies through static, low-cost type checking, as opposed to model checking.
Through the examples presented in this article, we have shown that the Pabble language is not limited to high-performance parallel applications. The examples, including web services and RPC, cover a broad category of interaction-centric, scalable distributed applications. Our simple, formally based language provides an approach to designing services and applications with safe interaction patterns.
6 Related work

Formal verification for parallel applications
Formal verification of message-passing parallel programming has been actively studied in the area of MPI parallel applications. A recent survey [13] summarises a wide range of model checking-based verification methods for MPI. Among them, ISP [31] is a dynamic verifier which applies model-checking techniques to identify potential communication deadlocks in MPI. The tool uses a fixed test harness and, in order to reduce the state space of possible thread interleavings of an execution, exploits independence between thread actions. Later, in [32], the authors improved its scheduling policy to make the verification more efficient. While their approach aims to cover common deadlock patterns in MPI programs, it is still limited to a finite number of tests. Our approach does not rely on external testing, and all session-typable programs are guaranteed communication-safe and deadlock-free by low-cost static code generation and type checking.
TASS [29] is another tool that combines symbolic execution [28] and model checking techniques to verify safety properties of MPI programs. The tool takes a C/MPI application and an input n ≥ 1 which restricts the input space, then constructs an abstract model with n processes and checks its functional equivalence and deadlocks by executing the model of the application. TASS does not verify properties for an unbounded number of communication participants, nor does it treat parameterisation, whereas we can work with message-passing programs where the number of participants is unknown at compile time, as long as they are written in well-formed, projectable Pabble.

Formally based MPI languages
Pilot [8] is a parallel programming library built on standard MPI to provide a simplified parallel programming abstraction based upon CSP. The communication is synchronous, and channels are untyped to facilitate reuse for different types. The implementation includes an analyser to detect communication deadlocks at runtime. Our proposed type checker is static and is able to detect and prevent deadlocks before execution.
Interprocedural control flow graphs (ICFG) [30] and parallel control flow graphs (pCFG) [7] are techniques to analyse MPI parallel programs for potential message leak errors. Their approach extends traditional data-flow analysis by connecting the control-flow graphs of concurrent processes with communication edges in order to derive the communication pattern and topology of a parallel program. They take a bottom-up, engineering-based approach, in contrast to our formally based, top-down global protocol approach, which gives a high-level understanding of the overall communication by design, in addition to the communication safety assurance of multiparty session types.

Parameterised multiparty session types
Previous work by Ng et al. [23] introduces a C programming framework based on multiparty session types (MPSTs), but it does not treat parameterisation. Hence the user needs to describe all interactions in the protocol explicitly, and the type checker does not work if the number of participants is unknown at compile time. Pabble's theoretical basis is developed in [12], where parameterised MPSTs are formalised using the dependent type theory of Gödel's System T. The main aim of [12] is to investigate the decidability and expressiveness of parameterisation of participants. Type checking in [12] is undecidable when the indices are not limited to decidable arithmetic subsets or when the number of loop iterations in the parameterised types is infinite. The design of Pabble is inspired by the LTSA tool from a concurrency modelling textbook used for undergraduate teaching at the authors' university for over two decades [19]. The notations for parameterisation from the LTSA tool offer not only practical restrictions to cope with the undecidability of parameterised MPSTs [12], but also concise representations for parameterised parallel languages. Our work is the first to apply parameterised MPSTs in a practical environment, and one foremost aim of our framework with Pabble and its parameterised notation is to be developer friendly [27] without compromising the strong formal basis of session types.

Dependent typing systems
Liquid Types [26] is a dependent typing system that automatically infers memory safety properties from program source code without verbose annotations. The work in [25] introduced an analyser for the C language in a low-level imperative environment based on Liquid Types and refinement types. Recent work on Liquid Types [18] combined the tool with SMT solvers to assist parallelisation of code regions by determining statically whether parallel threads will run on disjoint shared memory without races. Our work applies dependent session types to guarantee different kinds of safety, namely communication safety and deadlock freedom, in explicit message-passing-based distributed and parallel programming rather than shared-memory concurrency. It is an interesting future topic to integrate with model-checking tools to handle projectability with more complex indices, in addition to functional correctness of session programs.

Session-based approaches to parallel programming
Recent work [14,20] aims to use session types for deductive verification of MPI programs. A new type language is designed specifically for MPI, and VCC, a concurrent C verifier, is used to verify the correctness of MPI programs against the type language. While Pabble was designed with influences from parallel programming APIs and parallel programming use cases, it was designed as an independent high-level abstraction over distributed interactions. As a result, our language makes no assumptions about the execution environment (e.g. collective loops in MPI), which allows Pabble to represent general protocols from distributed systems or Web services with distinct roles, as shown in the examples.

Future work
Future work includes extending Pabble and the underlying theory with support for modelling process creation and destruction, such as the dynamic multirole approach described in [10].
A number of enhancements are planned for Pabble, including support for annotations which complement the protocol description to specify assertions. The type checking process can use the extra constraints or conditions, in combination with model checkers, to also assure functional correctness of the overall application. Annotations will also enable integration with the runtime monitoring described in [17] for a combined static and dynamic approach to communication-correct applications using Pabble.
An approach to generating distributed parallel applications is in progress, using a combination of a Pabble protocol, which describes the interaction aspects of the application, and computation code, which describes its sequential computation behaviour.

Figs. 3-6 list the four basic messaging patterns and the interpretations of their projections: point-to-point, scatter (distribution), gather (collection) and all-to-all (symmetric distribution and collection). As shown in the figures, the combination of the projected local statements and the kind of local role being projected (i.e. single participant or group role) is unique and identifies the communication pattern in the global protocol.

Pabble examples

In §2.5 we describe how to obtain a local Pabble protocol by projection from a global Pabble protocol. The local protocol can then be used as a blueprint to implement parallel programs. In this section we run through two examples of local protocol projection, using a Ring protocol in §3.1 and a MapReduce protocol in §3.2, showing the projection of protocols involving point-to-point and multicast collective interactions respectively. Then we present Pabble use cases in Web services in §3.3 and Remote Procedure Call (RPC) composition in §3.4, showing the capabilities of Pabble as a general-purpose parameterised protocol description language. Finally, we show an implementation of a parallel linear equation solver in MPI (§3.5) following a wraparound mesh protocol designed in Pabble, demonstrating how Pabble can be used in practical programming. Additional Pabble examples from the Dwarfs [4] evaluation metric are available from our web page [24].

3.1 Projection example: Ring protocol

global protocol Ring(role Worker[1..N]) {
  rec LOOP {
    Data(int) from Worker[i:1..N-1] to Worker[i+1];
    Data(int) from Worker[N] to Worker[1];
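In an SPMD implementation of such a ring, each Worker resolves the parameterised indices to concrete peers from its own index. A minimal plain-C sketch of the wraparound neighbour calculation; the helper names are hypothetical, and indices are 1-based as in the protocol.

```c
/* Hypothetical helpers mapping a 1-based Worker index to its ring peers.
 * Worker[i] sends Data(int) to ring_next(i, N); Worker[N] wraps to 1. */
static int ring_next(int worker, int N) { return worker < N ? worker + 1 : 1; }
static int ring_prev(int worker, int N) { return worker > 1 ? worker - 1 : N; }
```

An endpoint at index i would then send to ring_next(i, N) and receive from ring_prev(i, N), realising the two parameterised statements of the global protocol.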

Table 1: Projection of G onto R at the end-point role R_e. L and L_i correspond to the projections of G and G_i onto R. Columns: Range (b), Expr. (e), apply(b, e), inv(e).