Compiling Cooperative Task Management to Continuations

  • Keiko Nakata
  • Andri Saar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8161)


Abstract. Although preemptive concurrency models are dominant for multi-threaded concurrency, they may be criticized for the complexity of reasoning caused by implicit context switches. The actor model and cooperative concurrency models have regained attention because they encapsulate the thread of control. In this paper, we formalize a continuation-based compilation of cooperative multitasking for a simple language and prove its correctness.



1 Introduction

In a preemptive concurrency model, threads may be suspended and activated at any time. While preemptive models are dominant for multi-threaded concurrency, they may be criticized for the complexity of reasoning caused by implicit context switches. The programmer often has to resort to low-level synchronization primitives, such as locks, to prevent unwanted context switches. Programs written in this way tend to be error-prone and do not scale. The actor model [2] addresses this issue. Actors encapsulate the thread of control and communicate with each other by sending messages. They are also able to call blocking operations such as sleep, await and receive, reminiscent of cooperative multitasking. Languages such as Erlang and Scala support actor-based concurrency models.

Creol [8] and ABS [7] combine a message-passing concurrency model and a cooperative concurrency model. In Creol, each object encapsulates a thread of control and objects communicate with each other using asynchronous method calls. Asynchronous method calls, instead of message-passing, provide a type-safe communication mechanism and are a good match for object-oriented languages [3, 4]. ABS generalizes the concurrency model of Creol by introducing concurrent object groups [10] as the unit of concurrency. The concurrency model of ABS can be split into two layers: in the first layer, we have local, synchronous and shared-memory communication within one concurrent object group (COG), and in the second layer we have asynchronous message-based concurrency between different concurrent object groups, as in Creol. The behavior of one COG is based on the cooperative multitasking of external method invocations and internal method activations, with concrete scheduling points where a different task may get scheduled. Between different COGs only asynchronous method calls may be used; different COGs have no shared object heap. The order of execution of asynchronous method calls is not specified. The result of an asynchronous call is a future; callers may decide at run-time when to synchronize with the reply from a call. Asynchronous calls may be seen as triggers that spawn new method activations (or tasks) within objects. Every object has a set of tasks that are to be executed (originating from method calls). Among these, at most one task of all the objects belonging to one COG is active; the others are suspended and awaiting execution. The concurrency models of Creol and ABS are designed to be suitable in the distributed setting, where one COG executes on its own (virtual) processor in a single node and different COGs may be executed on different nodes in the network.

In this paper, we are interested in the compilation of cooperative multitasking into continuations, motivated by the goal of executing a cooperative multitasking model on the JVM platform, which employs a preemptive model. The basic idea of using continuations to manage the control behavior of a computation has been known since the 1980s [6, 12], and is still considered a viable technique [1, 9, 11]. This is particularly so if the programming language supports first-class continuations, as is the case for Scala, since one can then obviate manual stack management. The contribution of the paper is a correctness proof of such a compilation scheme. Namely, we create a simplified source language by extending the While language with (synchronous) procedure calls and operations for cooperative multitasking (i.e., blocking operations and creation of new tasks), and define a compilation function from the source language into the target language, which extends While with continuation operations; the target language is sequential. We then prove that the compilation preserves the operational behavior from the source language to the target language.

The remainder of the paper is organized as follows. We define the source language and its operational semantics in the next section, and the target language and its operational semantics in Section 3. In Section 4, we present the compilation function from the source language to the target language and, in Section 5 we prove its correctness. We conclude in Section 6.
Fig. 1. Syntax of the source language

2 Source Language

In Figure 1, we define the syntax for the source language. We use the overline notation to denote sequences, with \(\epsilon \) denoting an empty sequence. It is the While language extended with (local) variable definitions, \(\mathtt {var}~ x = e\), procedure calls \(f(\overline{e})\), the await statement \(\mathtt {await}~ e\), creation of a new task \(\mathtt{spawn } ~ f ~ \overline{e}\), and the return statement \(\mathtt {return} \). For simplicity, we syntactically distinguish local variable assignment \(x := e\) and global variable assignment \(u := e\). The statement \(\mathtt {var}~ x = e\) defines a (task) local variable \(x\) and initializes it with the value of \(e\). The statement \(f(\overline{e})\) invokes the procedure \(f\) with arguments \(\overline{e}\), to be executed within the same task. The procedure does not return the result to the caller, but may store the result in a global variable. The statement \(\mathtt {await}~ e\) suspends the execution of the current task, which can be resumed when the guard expression \(e\) evaluates to true. The statement \(\mathtt{spawn } ~ f ~ \overline{e}\) spawns a new task, which executes the body of \(f\) with arguments \(\overline{e}\). (Hence, in contrast to procedure calls, which are synchronous, \(\mathtt{spawn } ~ f ~ \overline{e}\) is like an asynchronous procedure call executed in a new task.) The \(\mathtt {return} \) statement is a runtime construct, not appearing in the source program, and will be explained later.

We assume disjoint supplies of local variables (ranged over by \(x\)), global variables (ranged over by \(u\)), and procedure names (ranged over by \(f\)). We assume a set of (pure) expressions, whose elements are ranged over by \(e\). We assume the set of values to be the integers, non-zero integers counting as truth and zero as falsity. The metavariable \(v\) ranges over values. We have two kinds of states: local states and global states. A local state, ranged over by \(\rho \), maps local variables to values; a global state, ranged over by \(\sigma \), maps global variables to values. We denote by \(\emptyset \) an empty mapping, whose domain is empty. Communication between different tasks is achieved via global variables. For simplicity, we assume a fixed set of global variables. The notation \(\rho [x\mapsto v]\) denotes the update of \(\rho \) with \(v\) at \(x\), when \(x\) is in the domain of \(\rho \). If \(x\) is not in the domain, it denotes a mapping extension. The notation \(\sigma [u\mapsto v]\) is defined similarly. We assume given an evaluation function \([\![e ]\!]_ {(\rho ,\sigma )}\), which evaluates \(e\) in the local state \(\rho \) and the global state \(\sigma \). We write \((\rho ,\sigma ) \models e\) and \((\rho ,\sigma ) \not \models e\) to denote that \(e\) is true, resp. false, with respect to \(\rho \) and \(\sigma \). A stack \(\varPi \) is a non-empty list of local states, whose elements are separated by semicolons. A stack grows leftward, i.e., the leftmost element is the topmost element.

A program \(P\) consists of a procedure environment \({\mathrm{env }}_F\) which maps procedure names to pairs of a formal argument list and a statement, and a global state which maps global variables to their initial values. The entry point of the program will be the procedure named main.

We define the operational semantics of the source language as a transition system on configurations, in the style of structural operational semantics. A configuration \({ cfg }\) consists of an active task identifier \(n\), a global variable mapping \(\sigma \) and a set of tasks \(\varTheta \). A task has an identifier and may be in one of three forms: a triple \(\langle e, S, \varPi \rangle \), representing a task that is waiting to be scheduled, where \(e\) is the guard expression, \(S\) the statement and \(\varPi \) its stack; a pair \(\langle S, \varPi \rangle \), representing the currently active task; or a singleton \(\langle \varPi \rangle \), representing a terminated task.
$$\begin{aligned} \begin{array}{lcl} Configuration &{} { cfg }&{}{:}{:}{=}\ n, \sigma \triangleright \varTheta \\ Task~sets &{} \varTheta &{}{:}{:}{=}\ n\langle e, S, \varPi \rangle \mid n\langle S, \varPi \rangle \mid n\langle \varPi \rangle \mid \varTheta \parallel \varTheta \end{array} \end{aligned}$$
The order of tasks in the task set is irrelevant: the parallel operator \(\parallel \) is commutative and associative. Formally, we assume the following structural equivalence:
$$\begin{aligned} \varTheta \equiv \varTheta \qquad \varTheta \parallel \varTheta ' \equiv \varTheta ' \parallel \varTheta \qquad \varTheta \parallel (\varTheta ' \parallel \varTheta '') \equiv (\varTheta \parallel \varTheta ') \parallel \varTheta '' \end{aligned}$$
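To make the shape of configurations concrete, here is a minimal Python sketch (our own encoding, not from the paper; all names are ours). Keying the task set by task identifier makes the parallel composition commutative and associative by construction, matching the structural equivalence.

```python
from dataclasses import dataclass

# A suspended task <e, S, Pi>: guard expression, residual statement, stack.
@dataclass(frozen=True)
class Suspended:
    guard: str
    stmt: str
    stack: tuple  # local states, topmost first; here local states are tuples

# A configuration n, sigma |> Theta.
@dataclass
class Config:
    active: int   # identifier n of the active task
    glob: dict    # global state sigma
    tasks: dict   # Theta, keyed by task identifier

# Dict equality ignores insertion order, just like the structural
# equivalence makes the order of tasks in Theta irrelevant.
theta1 = {1: Suspended("true", "u := 1", ((),)),
          2: Suspended("u = 1", "u := 2", ((),))}
theta2 = {2: Suspended("u = 1", "u := 2", ((),)),
          1: Suspended("true", "u := 1", ((),))}
assert theta1 == theta2
```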
Fig. 2. Semantics of the source language

Transition rules in the semantics are in the form \({\mathrm{env }}_F \vdash { cfg }\rightarrow { cfg }'\), shown in Figure 2. The first two rules (S-Cong and S-Equiv) deal with congruence and structural equivalence. The rules for assignment, \(\mathtt {skip} \), if-then-else and while are self-explanatory. For instance, in the rule S-Assign-Local, the task is of the form \(n\langle S, \varPi '\rangle \) where \(S = x:= e\) and \(\varPi ' = \rho ;\varPi \). Note that the topmost element of the stack is the current local state. The rules for sequential composition may deserve some explanation. If the first statement \(S_1\) suspends guarded by \(e\) in the stack \(\varPi '\) with the residual statement \(S'_1\) to be run when resumed, then the entire statement \(S_1;S_2\) suspends in \(\langle e, S'_1;S_2, \varPi '\rangle \), where the residual statement now contains the second statement \(S_2\) (S-Seq-Grd). If \(S_1\) terminates in \(\varPi '\), then \(S_2\) will run next in \(\varPi '\) (S-Seq-Fin). Otherwise, \(S_1\) transfers to \(S'_1\) with the stack \(\varPi '\), so that \(S_1;S_2\) transfers to \(S'_1;S_2\) with the same stack (S-Seq-Step). The await statement immediately suspends the currently active task (S-Await), enabling us to switch to some other task in accordance with the scheduling rules. The await statement (and the scheduling rules) at work can be seen in Figure 3. The statement \(\mathtt{spawn } ~ f ~ \overline{e}\) creates a new task \(n'\langle \mathtt {true} , S, [\overline{x} \mapsto \overline{v}]\rangle \) with \(n'\) a fresh identifier (S-Spawn). The caller task continues to be active. The newly created task is suspended, guarded by \(\mathtt {true} \), and may get scheduled at scheduling points by the scheduling rules (see below).
Procedure invocation \(f(\overline{e})\) evaluates the arguments \(\overline{e}\) in the current state, pushes into the stack the local state \([\overline{x} \mapsto \overline{v}]\), mapping the formal parameters to the actual arguments, and transfers to \(S;\mathtt {return} \), where \(S\) is the body of \(f\) (S-Call). The \(\mathtt {return} \) statement pops the topmost element from the stack (S-Return). The local variable definition \(\mathtt {var}~ x = e\) extends the current local state with the newly defined variable and initializes it with the value of \(e\) (S-Var).
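The stack discipline of S-Call and S-Return can be illustrated with a small Python sketch (our own encoding; `call`, `body` and the concrete states are hypothetical): invocation pushes a fresh local state mapping the formal parameters to the actual arguments, and the trailing return pops it.

```python
# Stack of local states; the leftmost (index 0) element is the topmost,
# i.e., the current local state, following the paper's convention.
stack = [{"y": 10}]   # caller's local state
glob = {"u": 0}       # global state

def call(params, args, body):
    stack.insert(0, dict(zip(params, args)))  # S-Call: push [xbar -> vbar]
    body()                                    # run the procedure body S
    stack.pop(0)                              # trailing return (S-Return) pops

def body():
    glob["u"] = stack[0]["x"] + 1  # the body reads its own local state

call(["x"], [41], body)
assert glob["u"] == 42
assert stack == [{"y": 10}]  # the caller's stack is restored
```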

The last three rules deal with scheduling. If the current active task has terminated, then a new task whose guard evaluates to true is chosen to be active (S-Sched-Fin). When the active task suspends, a scheduling point is reached. The rule (S-Sched-Same) considers the case in which the same task is scheduled; the rule (S-Sched-Other) considers the case in which a different task is scheduled.
Fig. 3. The full execution trace of the example program

Fig. 4. Example of a derivation in the source language

As an example, we will look at a program containing one global variable \(u\) with the initial value \(0\) and the following procedures:
$$\begin{aligned} \mathsf{f }&\mapsto u := 1 \\ \mathsf{main }&\mapsto u := 3; \mathtt{spawn } ~ \mathsf {f} ~ \epsilon ; \mathtt {await}~ u = 1; u := 2 \end{aligned}$$
A detailed step, showing the full derivation, can be seen in Figure 4. A full execution trace, showing all intermediate configurations, is shown in Figure 3.
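The behavior traced in Figure 3 can be reproduced with a small Python simulation (our own illustration, not the paper's formal semantics): each task is a generator that yields a guard closure at its await points, and a driver loop plays the role of the scheduling rules.

```python
# Global state with one global variable u, initially 0, as in the example.
state = {"u": 0}

def f():
    state["u"] = 1
    yield from ()  # no suspension points; `yield from ()` makes f a generator

def main(tasks):
    state["u"] = 3
    tasks.append((f(), lambda: True))  # spawn f: new task, guard true
    yield lambda: state["u"] == 1      # await u = 1
    state["u"] = 2

def run(entry):
    # The task set: (generator, guard) pairs, playing the role of Theta.
    tasks = [(entry_gen, (lambda: True)) for entry_gen in [entry([])]]
    tasks = []
    tasks.append((entry(tasks), lambda: True))
    while tasks:
        for i, (gen, guard) in enumerate(tasks):
            if guard():                    # scheduling: pick a ready task
                del tasks[i]
                break
        else:
            raise RuntimeError("no task with a true guard")
        try:
            new_guard = next(gen)          # run until the next await
            tasks.append((gen, new_guard)) # suspend again with its guard
        except StopIteration:
            pass                           # task terminated

run(main)
assert state["u"] == 2  # matches the final configuration of the trace
```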

3 Target Language

We proceed to the target language. In Figure 5, we present the syntax of the target language. Expressions of the target language contain, besides pure expressions, continuations \([S, \varPi ]\), which are pairs of a statement \(S\) and a stack \(\varPi \), and support for guarded (multi)sets: collections which contain pairs of an expression and a value. The expression stored with each element is called a guard expression, and is evaluated when we query the set: only elements whose guard expressions hold may be returned. There are five expressions in the language to work with guarded sets: an empty set \(\emptyset \), checking whether a set is empty (\(\mathtt{isEmpty } ~ e_s\)), adding an element (\(\mathtt{add } ~ e_s ~ e_g ~ e\)), fetching an element (\(\mathtt{get } ~ e_s\)) and removing an element (\(\mathtt{del } ~ e_s ~ e\)).
Fig. 5. Syntax of the target language
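The five guarded-set operations can be read as the following Python sketch (our own encoding; guards are modelled as zero-argument closures rather than syntactic expressions):

```python
# A guarded multiset holds (guard, value) pairs. Guards are evaluated
# only when the set is queried: `get` may return any element whose
# guard currently holds.
def empty():
    return []

def is_empty(s):
    return len(s) == 0

def add(s, guard, value):
    return s + [(guard, value)]

def get(s):
    # The semantics is nondeterministic; this sketch picks the first
    # element whose guard evaluates to true.
    for elem in s:
        if elem[0]():
            return elem
    raise LookupError("no element with a true guard")

def delete(s, elem):
    t = list(s)
    t.remove(elem)  # removes a single occurrence: multiset semantics
    return t

ready = {"flag": False}
s = add(add(empty(), lambda: True, "a"), lambda: ready["flag"], "b")
assert get(s)[1] == "a"        # only "a" is ready at this point
ready["flag"] = True
s = delete(s, get(s))          # remove "a" once both are ready
assert get(s)[1] == "b"
assert is_empty(delete(s, get(s)))
```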

As for the source language, the target language extends While with local variable definitions and procedure calls. We also add the delimited control operators \(\mathtt{shift~ } k ~ \{ S \}\), \(\mathtt{reset~ } \{ S \}\) and \(\mathtt{invoke } ~k\) [5]. The statement \(\mathtt{shift~ } k ~ \{ S \}\) captures the rest of the computation, or continuation, up to the closest surrounding \(\mathtt{reset~ } \{ \}\), binds it to \(k\), and proceeds to execute \(S\). In \(\mathtt{shift~ } k ~ \{ S \}\), \(k\) is a binding occurrence, whose scope is \(S\). The statement \(\mathtt{reset~ } \{ S \}\) thus delimits the captured continuation. The statement \(\mathtt{invoke } ~k\) invokes, or jumps into, the continuation bound to the variable \(k\). The statement \(\mathtt{R~ } \{ S \}\), where \(\mathtt {R} \) is a new constant, is a runtime construct to be explained later.

The target language is sequential and, unlike the source language, it contains no explicit support for parallelism. Instead, we provide building blocks – continuations and guarded sets – that are used to switch between tasks and implement an explicit scheduler in Section 4.

We assume given an evaluation function \([\![e ]\!]_ {(\varPi ,\sigma )}\) for pure expressions, which evaluates \(e\) with respect to the stack \(\varPi \) (the evaluation only looks at the current local state, which is the topmost element of \(\varPi \)) and the global state \(\sigma \). In Figure 6, the evaluation function is extended to operations for guarded sets.
Fig. 6. Semantics of the target language

We define an operational semantics for the target language as a reduction semantics over configurations, using evaluation contexts. A configuration \(\langle S, \varPi , \sigma \rangle \) is a triple of a statement \(S\), a stack \(\varPi \) and a global state \(\sigma \).

Evaluation contexts \(E\) are statements with a hole, specifying where the next reduction may occur. They are defined by
$$\begin{aligned} E\ {:}{:}{=}\ [] \mid E; S \mid \mathtt{R~ } \{ E \} \end{aligned}$$
We denote by \(E[S]\) the statement obtained by placing \(S\) in the hole of \(E\).
We define basic reduction rules in Figure 6. \(\mathtt{reset~ } \{ S \}\) inserts a marker \(\dagger \) into the stack, just below the current state, and reduces to \(\mathtt{R~ } \{ S \}\) to continue the execution of \(S\) (T-Reset). We use a marker to delimit the portion of the stack captured by \(\mathtt{shift }\) and to align the stack when exiting from \(\mathtt{R~ } \{ \}\). The runtime construct \(\mathtt{R~ } \{ \}\) is used to record that the marker has been set. \(\mathtt{shift~ } k ~ \{ S \}\) captures the rest of the execution up to and including the closest surrounding \(\mathtt{R~ } \{ \}\) together with the corresponding portion of the stack, binds it to a fresh variable \(k'\) in the local state, and continues with the statement \(S'\) obtained by substituting \(k\) by \(k'\) in \(S\) (T-Shift). The surrounding \(\mathtt{R~ } \{ \}\) is kept intact. \(F\) is an evaluation context that does not intersect \(\mathtt{R~ } \{ \}\), formally,
$$\begin{aligned} F\ {:}{:}{=}\ [] \mid F; S \end{aligned}$$
Note that \(\mathtt{shift~ } k ~ \{ S \}\) captures the stack up to and including the topmost \(\dagger \), which has been inserted by the closest surrounding reset. Once the body of \(\mathtt{R~ } \{ \}\) terminates, i.e., reduces to \(\mathtt {skip} \), we remove the \(\mathtt{R~ } \{ \}\) and pop the stack until the topmost \(\dagger \), leaving the state just above \(\dagger \) in the stack (T-R). \(\mathtt{invoke } ~k\) invokes the continuation bound to \(k\) (T-Invoke). Namely, if \(k\) is bound to \([S, \varPi ']\) in the local or global state, then the statement reduces to \(S; \mathtt {return} \) and the stack \(\varPi '\) is pushed onto the current stack. \(S\) must necessarily be of the form \(\mathtt{R~ } \{ S' \}\), where \(S'\) does not contain \(\mathtt{R }\), and \(\varPi '\) contains exactly one \(\dagger \) at the bottom. When exiting from the \(\mathtt{R~ } \{ \}\), the state immediately above \(\dagger \) in \(\varPi '\) will be left in the stack, which is popped by the trailing \(\mathtt {return} \). An example of how to capture and invoke a continuation is shown in Figure 7. In the example, we assume that the variables \(u\) and \(u'\) are global.
Fig. 7. Capturing and invoking a continuation

A procedure call \(f (\overline{e})\) reduces to \(S; \mathtt {return} \), where \(S\) is the body of the procedure \(f\), and pushes a local state \([\overline{x} \mapsto \overline{v}]\), binding the procedure's formal parameters to the actual arguments, onto the stack (T-Call). The trailing \(\mathtt {return} \) ensures that, once the execution of \(S; \mathtt {return} \) terminates, the stack is aligned with the original \(\varPi \). \(\mathtt {return} \) pops the topmost element from the stack (T-Return). The remaining rules are self-explanatory.

Given the basic reduction rules, we now define a standard reduction, denoted by \(\mapsto \), by
$$\begin{aligned} \frac{{\mathrm{env }}_F \vdash \langle S, \varPi , \sigma \rangle \rightarrow \langle S', \varPi ', \sigma ' \rangle }{{\mathrm{env }}_F \vdash \langle E [ S ], \varPi , \sigma \rangle \mapsto \langle E [ S' ], \varPi ', \sigma ' \rangle } \end{aligned}$$
stating that the configuration \(\langle S, \varPi , \sigma \rangle \) standard reduces to \(\langle S', \varPi ', \sigma '\rangle \) if there exist an evaluation context \(E\) and statements \(S_0\) and \(S'_0\) such that \(S = E[S_0]\) and \(S' = E[S'_0]\) and \({\mathrm{env }}_F \vdash \langle S_0, \varPi , \sigma \rangle \rightarrow \langle S'_0, \varPi ', \sigma '\rangle \). The standard reduction is deterministic.

4 Compilation

When compiling a program \(P\) into the target language, we compile expressions and statements according to the scheme shown in Figure 8. Expressions are translated into the target language as-is, and statements that have a corresponding equivalent in the target language are also translated in a straightforward manner. The two statements that have no direct correspondence in the target language are \(\mathtt{await }\) and \(\mathtt{spawn }\). We look at how these statements are translated and how they interact with the scheduler later.
Fig. 8. Compilation of source programs

The central idea of the compilation scheme is to use continuations to handle the suspension of tasks, and have an explicit scheduler (for brevity, in the examples we use \( Sched \) to denote the scheduler), shown in Figure 9. The control will pass to the scheduler every time a task either suspends or finishes, and the scheduler will pick up a new task to execute. During runtime, we also use a global variable \(T\), which we assume not to be used by the program to be compiled. The global variable \(T\) stores the task set, corresponding to \(\varTheta \) in the source semantics, that contains all the tasks in the system. The tasks are stored as continuations with guard expressions.
Fig. 9. The scheduler
The scheduler loops until the task set is empty (all tasks have terminated), in each iteration picking a continuation from \(T\) whose guard expression evaluates to \(\mathtt {true} \), removing it from \(T\) and then invoking the continuation using \(\mathtt{invoke } ~k\). The body of the scheduler is wrapped in a \(\mathtt {reset} \), guaranteeing that when a task suspends, the capture will be limited to the end of the current task. After the task's execution is completed, either by suspension or by finishing its work, control comes back to the scheduler.

Suspension (\(\mathtt {await}~ e\) in the source language) is compiled to a \(\mathtt {shift} \) statement. When evaluating the statement, the original computation until the end of the enclosing \(\mathtt{R~ } \{ \}\) will be captured and stored in the continuation \(k\), and the original program is replaced with the body of the \(\mathtt {shift} \). The enclosing \(\mathtt{R~ } \{ \}\) guarantees that we capture only the statements up until the end of the current task, thus providing a facility to proceed with the execution of the task later. The body of the \(\mathtt {shift} \) statement simply takes the captured continuation, \(k\), and adds it to the global task set, with the appropriate guard expression. After adding the continuation to the task set, the control passes back to the scheduler.
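Ignoring the formal details of \(\mathtt{shift }\)/\(\mathtt{reset }\) and the stack markers, the net effect of this scheme can be sketched in Python (our own encoding, not the paper's compiler: continuations are plain closures, and the example program of Section 2 is split by hand at its suspension point).

```python
# Global program state and the global task set T of (guard, continuation)
# pairs; continuations are ordinary Python closures here.
state = {"u": 0}
T = []

def sched():
    # Figure 9's loop: until T is empty, pick a continuation whose guard
    # holds, remove it from T, and invoke it.
    while T:
        for i, (guard, k) in enumerate(T):
            if guard():
                del T[i]
                k()
                break
        else:
            raise RuntimeError("no runnable task")

# Hand-compiled tasks for the example program: an await splits a task
# body into stages, and the rest of the task becomes an explicit
# continuation added to T together with its guard.
def main_body():
    state["u"] = 3
    T.append((lambda: True, f_body))                 # spawn f
    T.append((lambda: state["u"] == 1, main_rest))   # await u = 1

def main_rest():
    state["u"] = 2

def f_body():
    state["u"] = 1

# Entry point: T := empty set; main_A(); Sched
T.append((lambda: True, main_body))
sched()
assert state["u"] == 2
```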

Procedures in \({\mathrm{env }}_F\) will get translated into two different procedures for synchronous and asynchronous calls, as follows:
$$\begin{aligned} f_S&\mapsto [\![S ]\!]\\ f_A&\mapsto \mathtt{reset~ } \{ \mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; [\![S ]\!] \} \end{aligned}$$
When making an asynchronous call, the body of the procedure will be immediately captured in a continuation, added to the global task set, and the control passes back to the invoker via the usual synchronous call mechanism.
The entry point of a program in the source language, main, is a regular procedure and will get translated according to the usual rules into two procedures, \(\mathsf {main} _A\) and \(\mathsf {main} _S\). In the target language, we must invoke the scheduler, and thus we use a different entry point:
$$\begin{aligned} T := \emptyset ; \mathsf {main} _A (); Sched \end{aligned}$$
After initializing the task set to be empty, the call \(\mathsf {main} _A ()\) adds a task that will run the original entry point of the program, and control then passes to the scheduler. As there is only one task in the task set, namely the task that will invoke the original entry point, the scheduler will immediately proceed with it.

5 Correctness

In this section, we prove that our compilation scheme is correct in the sense that it preserves the operational behavior from the source program into the (compiled) target program. Specifically, we prove that reductions in the source language are simulated by corresponding reductions in the target language. To do so, we extend the compilation scheme to configurations in Figure 10.
Fig. 10. Compilation of configurations

The compilation scheme for configurations follows the idea of the compilation scheme detailed in Section 4. We have two compilation functions: \([\![\cdot ]\!]_2\), which generates a task set from \(\varTheta \), and \([\![\cdot ]\!]\), which generates a configuration in the target semantics.

Every suspended task in the task set \(\varTheta \) is compiled to a pair consisting of the compiled guard expression and a continuation that has been constructed from the original statement and stack. The statement is wrapped in a \(\mathtt{R~ } \{ \}\) block and we prepend a \(\mathtt {skip} \) statement, just as it would happen when a continuation is captured in the target language.

If the active task is finished or is suspended (but no new task has been scheduled yet), the generated configuration will immediately contain the scheduler. If the task has suspended, the task is compiled according to the previously described scheme and appended to \(T\). Active tasks are wrapped in two \(\mathtt{R~ } \{ \}\) blocks and the stack \(\varPi \) is concatenated on top of the local state of the scheduler.

When the scheduler invokes a continuation \(k\), the continuation will stay in the local state of the scheduler until control comes back to the scheduler. This is unnecessary, as the value is never used after it has been invoked; furthermore, the variable is immediately assigned a new value after control passes back to the scheduler. Thus, as an optimization, we may switch to an alternative reduction rule for \(\mathtt{invoke } ~k\), which only allows a continuation to be used once, T-InvokeOnce, shown in Figure 11. Although the behavior of the program is equivalent under both versions, using the one-shot version also allows us to state the correctness theorem in a more concise and straightforward manner, as the local state of the scheduler will always be empty when we are currently executing some task. In the proof, we assume this rule to be used instead of the original T-Invoke rule.
Fig. 11. Alternative rule for \(\mathtt {invoke} \)
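The effect of T-InvokeOnce can be mimicked by a wrapper that invalidates a continuation after its first invocation (a hypothetical helper, for illustration only):

```python
def one_shot(k):
    # Wrap continuation k so that a second invocation fails, mirroring
    # T-InvokeOnce, which removes the binding of k upon invocation.
    used = False
    def invoke():
        nonlocal used
        if used:
            raise RuntimeError("one-shot continuation invoked twice")
        used = True
        return k()
    return invoke

k = one_shot(lambda: "done")
assert k() == "done"
try:
    k()
    raised = False
except RuntimeError:
    raised = True
assert raised  # the second invocation is rejected
```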

The following lemma states that the compilation of statements is compositional with respect to evaluation contexts, where evaluation contexts for the source language are defined inductively by
$$\begin{aligned} K\ {:}{:}{=}\ [] \mid K; S \end{aligned}$$

Lemma 1.

$$\begin{aligned}{}[\![K[S]]\!]= [\![K ]\!]\big [[\![S ]\!]\big ] \end{aligned}$$


Proof. By induction on the structure of \(K\). \(\square \)

The correctness theorem below states that a one-step reduction in the source language is simulated by multiple-step reductions in the target language.

As an example, in Figure 12 we show the compiled forms of both the initial and final configurations of the step in Figure 4; in Figure 13, we show how to reach the compiled equivalent of the final configuration in multiple steps in the target semantics.
Fig. 12. Example of compiling a configuration

Fig. 13. Reduction of the compiled configuration

Theorem 1.

For all configurations \({ cfg }_S\) and \({ cfg }_S'\) such that
$$\begin{aligned} {\mathrm{env }}_F \vdash { cfg }_S \rightarrow { cfg }_S' \end{aligned}$$
holds, we have
$$\begin{aligned}{}[\![{\mathrm{env }}_F ]\!]\vdash [\![{ cfg }_S ]\!]\mapsto ^+ [\![{ cfg }_S' ]\!]. \end{aligned}$$


Proof. By induction over the derivation, analyzing the step taken. The possible steps have one of the following forms:
  • Case
    $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$
    Rules matching this pattern are S-Assign-Local, S-Assign-Global, S-While-False, S-Spawn, S-Return, S-Var. As a representative example, we will look at S-Spawn in detail.
    $$\begin{aligned} \frac{{\mathrm{env }}_F~f = (\overline{x}, S) \quad \overline{v} = [\![\overline{e} ]\!]_{(\rho , \sigma )}\quad n' \text { is fresh}}{{\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \mathtt{spawn } ~ f ~ \overline{e}, \rho ; \varPi \rangle \rightarrow n, \sigma \triangleright n\langle \rho ; \varPi \rangle \parallel n'\langle \mathtt {true} , S, [\overline{x} \mapsto \overline{v}] \rangle } \end{aligned}$$
    In this case, the source and target configurations are compiled to:
    $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ f_A(\overline{e}) \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \end{aligned}$$
    Let the bottommost element of \(\varPi \) be \(\rho '\), taking \(\rho ' = \rho \) if \(\varPi \) is empty. The compiled source configuration reduces as follows:
    $$\begin{aligned}&[\![{\mathrm{env }}_F ]\!]\vdash \langle \mathtt{R~ } \{ \mathtt{R~ } \{ f_A(\overline{e}) \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{reset~ } \{ \mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; [\![S ]\!] \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}]; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; [\![S ]\!] \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} \}; \mathtt {return} \}; \mathtt {return} \}; Sched , [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \\&\qquad \qquad \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {return} \}; \mathtt {return} \}; Sched , [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]; \rho ; \varPi \dagger ; \emptyset \dagger , \\&\qquad \qquad \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt {return} \}; Sched , \rho '; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt {skip} \}; Sched , \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \end{aligned}$$
    The configuration obtained by evaluation is exactly the compiled configuration, so the claim holds in this case.
  • Case
    $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle e, S', \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$
    There are only two possible rules: S-Seq-Grd and S-Await. In both cases, it must be that \(\sigma = \sigma '\), \(\varPi = \varPi '\) and \(\varTheta \equiv \varTheta '\), and there exists some \(K\) such that \(S = K[\mathtt {await}~ e]\) and \(S' = K [\mathtt {skip} ]\). Therefore, taking into account Lemma 1, the configurations before and after the step compile to:
    $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![K ]\!][[\![\mathtt {await}~ e ]\!]] \}; \mathtt {return} \}; Sched , \varPi \dagger ;\emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![K ]\!][\mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ [\![e ]\!] ~ k \}; \mathtt {skip} ] \}; \mathtt {return} \}; Sched , \varPi \dagger ;\emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{([\![e ]\!], [\mathtt{R~ } \{ [\![K ]\!][ \mathtt {skip} ] \}, \varPi \dagger ])\}]\rangle \end{aligned}$$
    An example of this reduction can be seen in Figure 13.
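    The compiled \(\mathtt {await}~e\) thus behaves as \(\mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ [\![e ]\!] ~ k \}\): the task suspends and the pair of compiled guard and captured continuation lands in the task set \(T\). As an informal executable analogue (our illustration, not the paper's shift/reset semantics), a Python generator can stand in for the one-shot continuation \(k\) delimited by the enclosing reset:

```python
# Hedged analogue of compiling "await e": the task suspends and the pair
# (guard, continuation) is added to the task set T, mirroring
# "shift k { T := add T [[e]] k }". The generator is our stand-in for the
# continuation captured by shift; all names here are illustrative.

def task(state, log):
    log.append("before await")
    yield (lambda s: s["flag"])         # await flag: suspend with this guard
    log.append("after await")

state, log, T = {"flag": False}, [], []
k = task(state, log)
guard = next(k)                         # run the task up to the await...
T.append((guard, k))                    # ...and register (guard, k) in T

assert log == ["before await"]          # the rest of the task has not run
assert guard(state) is False            # its guard is not yet enabled
state["flag"] = True                    # once the guard becomes true,
next(k, None)                           # the scheduler may invoke k
assert log == ["before await", "after await"]
```

A real scheduler would only invoke `k` after checking the stored guard against the current store, which is exactly the role of \(Sched\) in the compilation.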
  • Case
    $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle S', \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$
    Rules matching this pattern are S-Seq-Fin, S-Seq-Step, S-If-True, S-If-False, S-While-True and S-Call. In the case of S-Seq-Step, we know that \(S = S_0;S_1\) and \(S' = S_0'; S_1\). By the induction hypothesis, we get that
    $$\begin{aligned}{}[\![{\mathrm{env }}_F ]\!]\vdash [\![n, \sigma \triangleright n\langle S_0, \varPi \rangle \parallel \varTheta ]\!]\rightarrow [\![n', \sigma ' \triangleright n\langle S_0', \varPi ' \rangle \parallel \varTheta ' ]\!]\end{aligned}$$
    Since, by the definition of the compilation function, \([\![S ]\!]= [\![S_0 ]\!]; [\![S_1 ]\!]\) and \([\![S' ]\!]= [\![S_0' ]\!]; [\![S_1 ]\!]\), we obtain the required result:
    $$\begin{aligned}{}[\![{\mathrm{env }}_F ]\!]\vdash [\![n, \sigma \triangleright n\langle S_0; S_1, \varPi \rangle \parallel \varTheta ]\!]\rightarrow [\![n', \sigma ' \triangleright n\langle S_0'; S_1, \varPi ' \rangle \parallel \varTheta ' ]\!]\end{aligned}$$
    For S-Seq-Fin, we know that \(S = S_0; S_1\) and \(S' = S_1\). Then the case follows by analyzing the step taken to reduce \(S_0\).

    The other cases are straightforward.

  • Case: one of the following three patterns
    $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \varPi '\rangle \parallel n' \langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n', \sigma \triangleright n\langle \varPi '\rangle \parallel n'\langle S, \varPi \rangle \parallel \varTheta \\ {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle e', S', \varPi '\rangle \parallel n' \langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n', \sigma \triangleright n\langle e', S', \varPi ' \rangle \parallel n'\langle S, \varPi \rangle \parallel \varTheta \\ {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \\ \end{aligned}$$
    These three patterns correspond to the three scheduling rules; we consider only the first.
    $$\begin{aligned} \frac{(\rho , \sigma ) \models e}{{\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \varPi ' \rangle \parallel n'\langle e, S, \rho ; \varPi \rangle \rightarrow n', \sigma \triangleright n'\langle S, \rho ; \varPi \rangle \parallel n\langle \varPi ' \rangle }\textsc {S-Sched-Fin} \end{aligned}$$
    $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![S ]\!] \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \\ \end{aligned}$$
    The initial configuration reduces as follows (some steps omitted):
    $$\begin{aligned}&[\![{\mathrm{env }}_F ]\!]\vdash \langle \mathtt {While} {(\lnot \mathtt{isEmpty } ~ T)}\mathtt {do} {\mathtt{reset~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}},\\&\qquad \qquad \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto \langle \mathtt{reset~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}; Sched , \\&\qquad \qquad \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}; Sched , \emptyset \dagger , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto ^* \langle \mathtt{R~ } \{ \mathtt{invoke } ~k \}; Sched , [k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ]] \dagger , \sigma [T \mapsto \emptyset ]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![S ]\!] \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \end{aligned}$$
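    The scheduler loop above repeatedly picks an enabled entry from \(T\), deletes it, and invokes the stored continuation inside a fresh reset. A hedged Python sketch of this loop, reusing generators as stand-ins for the captured continuations (all names are ours, chosen to echo the formalization; this is an analogue, not the paper's target language):

```python
# Sketch of "While (not isEmpty T) do reset { k := get T; T := del T k;
# invoke k }". Tasks are generators that yield a guard predicate at each
# await; new tasks enter T guarded by true, as in "T := add T true k".

def spawn(T, gen):
    """Register a freshly created task with the trivially true guard."""
    T.append((lambda state: True, gen))

def sched(T, state):
    """Pick an enabled (guard, k) pair, delete it from T, invoke k.
    The formal Sched assumes some guard holds; here we stop otherwise."""
    while T:
        enabled = [entry for entry in T if entry[0](state)]
        if not enabled:
            break                       # no guard is enabled
        entry = enabled[0]
        T.remove(entry)                 # T := del T k
        _, k = entry
        try:
            guard = next(k)             # invoke k: run until the next await,
            T.append((guard, k))        # which re-registers (guard, k) in T
        except StopIteration:
            pass                        # the task returned

def producer(state, log):
    state["ready"] = True
    log.append("produced")
    return
    yield                               # unreachable: makes this a generator

def consumer(state, log):
    yield (lambda s: s.get("ready", False))   # await ready
    log.append("consumed")

state, log, T = {}, [], []
spawn(T, consumer(state, log))
spawn(T, producer(state, log))
sched(T, state)
assert log == ["produced", "consumed"]
```

Although the consumer is spawned first, its guard is disabled after the await, so the scheduler runs the producer to completion and only then resumes the consumer, just as the operational semantics prescribes.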

6 Conclusion

In this paper, we formalized a compilation scheme from cooperative multitasking to delimited continuations. The source language extends While with procedure calls and with operations for blocking and for creating new tasks. The target language extends While with shift/reset and is sequential. We then proved the compilation scheme correct: reductions in the source language are simulated by corresponding reductions in the target language. We have implemented this compilation scheme in our compiler from ABS to Scala. The compiler covers a much richer language than our source language, including object-oriented features, and employs the experimental continuations plugin for Scala; it is integrated into the wider ABS Tool Suite. We are currently formalizing the results of this paper in the proof assistant Agda.


  1. In ABS, different tasks originating from the same object may communicate with each other via fields of the object.



This research was supported by the EU FP7 ICT project no. 231620 (HATS), the Estonian Centre of Excellence in Computer Science, EXCS, financed mainly by the European Regional Development Fund, ERDF, the Estonian Ministry of Education and Research target-financed research theme no. 0140007s12, and the Estonian Science Foundation grant no. 9398.


  1. Adya, A., Howell, J., Theimer, M., Bolosky, W.J., Douceur, J.R.: Cooperative task management without manual stack management. In: ATEC 2002: Proceedings of the General Track of the USENIX Annual Technical Conference, pp. 289–302. USENIX (2002)
  2. Agha, G.: Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press (1986)
  3. de Boer, F.S., Clarke, D., Johnsen, E.B.: A complete guide to the future. In: De Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 316–330. Springer, Heidelberg (2007)
  4. Caromel, D., Henrio, L., Serpette, B.P.: Asynchronous and deterministic objects. In: POPL 2004. ACM SIGPLAN Notices 39(1), 123–134 (2004)
  5. Danvy, O., Filinski, A.: Abstracting control. In: LFP 1990: Proceedings of the 1990 ACM Conference on LISP and Functional Programming, pp. 151–160. ACM (1990)
  6. Haynes, C.T., Friedman, D.P., Wand, M.: Continuations and coroutines. In: LFP 1984: Proceedings of the 1984 ACM Symposium on LISP and Functional Programming, pp. 293–298. ACM (1984)
  7. Johnsen, E.B., Hähnle, R., Schäfer, J., Schlatte, R., Steffen, M.: ABS: A core language for abstract behavioral specification. In: Aichernig, B.K., de Boer, F.S., Bonsangue, M.M. (eds.) FMCO 2010. LNCS, vol. 6957, pp. 142–164. Springer, Heidelberg (2011)
  8. Johnsen, E.B., Owe, O., Yu, I.C.: Creol: A type-safe object-oriented model for distributed concurrent systems. Theoretical Computer Science 365(1-2), 23–66 (2006)
  9. Karmani, R.K., Shali, A., Agha, G.: Actor frameworks for the JVM platform: a comparative analysis. In: PPPJ 2009: Proceedings of the 7th International Conference on Principles and Practice of Programming in Java, pp. 11–20. ACM (2009)
  10. Schäfer, J., Poetzsch-Heffter, A.: JCoBox: Generalizing active objects to concurrent components. In: D'Hondt, T. (ed.) ECOOP 2010. LNCS, vol. 6183, pp. 275–299. Springer, Heidelberg (2010)
  11. Srinivasan, S., Mycroft, A.: Kilim: Isolation-typed actors for Java. In: Vitek, J. (ed.) ECOOP 2008. LNCS, vol. 5142, pp. 104–128. Springer, Heidelberg (2008)
  12. Wand, M.: Continuation-based multiprocessing. In: LFP 1980: Proceedings of the 1980 ACM Conference on LISP and Functional Programming, pp. 19–28. ACM (1980)

Copyright information

© IFIP International Federation for Information Processing 2013

Authors and Affiliations

  1. Institute of Cybernetics at Tallinn University of Technology, Tallinn, Estonia
