# Compiling Cooperative Task Management to Continuations

## Abstract

Although preemptive models are dominant in multi-threaded concurrency, they complicate reasoning because context switches are implicit. The actor model and cooperative concurrency models have regained attention because they encapsulate the thread of control. In this paper, we formalize a continuation-based compilation of cooperative multitasking for a simple language and prove its correctness.

## 1 Introduction

In a preemptive concurrency model, threads may be suspended and activated at any time. While preemptive models are dominant for multi-threaded concurrency, they complicate reasoning because context switches are implicit. The programmer often has to resort to low-level synchronization primitives, such as locks, to prevent unwanted context switches. Programs written in this way tend to be error-prone and do not scale. The actor model [2] addresses this issue: actors encapsulate the thread of control and communicate with each other by sending messages. They are also able to call blocking operations such as sleep, await and receive, reminiscent of cooperative multi-tasking. Erlang and Scala's actor library are examples of such actor-based concurrency models.

Creol [8] and ABS [7] combine a message-passing concurrency model with a cooperative concurrency model. In Creol, each object encapsulates a thread of control, and objects communicate with each other through asynchronous method calls. Asynchronous method calls, rather than message passing, provide a type-safe communication mechanism and are a good match for object-oriented languages [3, 4]. ABS generalizes the concurrency model of Creol by introducing *concurrent object groups* [10] as the unit of concurrency. The concurrency model of ABS can be split into two layers: the first provides local, synchronous, shared-memory communication^{1} within one concurrent object group (COG), while the second provides asynchronous, message-based concurrency between different COGs, as in Creol. The behavior of one COG is based on the cooperative multitasking of external method invocations and internal method activations, with concrete scheduling points where a different task may get scheduled. Between different COGs only asynchronous method calls may be used; different COGs share no object heap. The order of execution of asynchronous method calls is not specified. The result of an asynchronous call is a future; callers may decide at run-time when to synchronize with the reply. Asynchronous calls may be seen as triggers that spawn new method activations (or tasks) within objects. Every object has a set of tasks to be executed (originating from method calls). Among all the tasks of the objects belonging to one COG, at most one is active; the others are suspended, awaiting execution. The concurrency models of Creol and ABS are designed for the distributed setting, where each COG executes on its own (virtual) processor in a single node and different COGs may execute on different nodes in the network.

In this paper, we are interested in the compilation of cooperative multi-tasking into continuations, motivated by the goal of executing a cooperative multi-tasking model on the JVM platform, which employs a preemptive model. The basic idea of using continuations to manage the control behavior of a computation has been known since the 1980s [6, 12] and is still considered a viable technique [1, 9, 11], particularly when the programming language supports first-class continuations, as Scala does, so that one can obviate manual stack management. The contribution of the paper is a correctness proof of such a compilation scheme. Namely, we create a simplified source language by extending the While language with (synchronous) procedure calls and operations for cooperative multi-tasking (i.e., blocking operations and creation of new tasks), and define a compilation function from the source language into the target language, which extends While with continuation operations—the target language is sequential. We then prove that the compilation preserves the operational behavior from the source language to the target language.

## 2 Source Language

In Figure 1, we define the syntax for the source language. We use the overline notation to denote sequences, with \(\epsilon \) denoting an empty sequence. It is the While language extended with (local) variable definitions, \(\mathtt {var}~ x = e\), procedure calls \(f(\overline{e})\), the await statement \(\mathtt {await}~ e\), creation of a new task \(\mathtt{spawn } ~ f ~ \overline{e}\), and the return statement \(\mathtt {return} \). For simplicity, we syntactically distinguish local variable assignment \(x := e\) and global variable assignment \(u := e\). The statement \(\mathtt {var}~ x = e\) defines a (task) local variable \(x\) and initializes it with the value of \(e\). The statement \(f(\overline{e})\) invokes the procedure \(f\) with arguments \(\overline{e}\), to be executed within the same task. The procedure does not return the result to the caller, but may store the result in a global variable. The statement \(\mathtt {await}~ e\) suspends the execution of the current task, which can be resumed when the guard expression \(e\) evaluates to true. The statement \(\mathtt{spawn } ~ f ~ \overline{e}\) spawns a new task, which executes the body of \(f\) with arguments \(\overline{e}\). (Hence, in contrast to procedure calls, which are synchronous, \(\mathtt{spawn } ~ f ~ \overline{e}\) is like an asynchronous procedure call executed in a new task.) The \(\mathtt {return} \) statement is a runtime construct, not appearing in the source program, and will be explained later.

We assume disjoint supplies of local variables (ranged over by \(x\)), global variables (ranged over by \(u\)), and procedure names (ranged over by \(f\)). We assume a set of (pure) expressions, whose elements are ranged over by \(e\). We take the set of values to be the integers, with non-zero integers counting as truth and zero as falsity. The metavariable \(v\) ranges over values. We have two kinds of states – local states and global states. A local state, ranged over by \(\rho \), maps local variables to values; a global state, ranged over by \(\sigma \), maps global variables to values. We denote by \(\emptyset \) the empty mapping, whose domain is empty. Communication between different tasks is achieved via global variables. For simplicity, we assume a fixed set of global variables. The notation \(\rho [x\mapsto v]\) denotes the update of \(\rho \) with \(v\) at \(x\), when \(x\) is in the domain of \(\rho \); if \(x\) is not in the domain, it denotes a mapping extension. The notation \(\sigma [u\mapsto v]\) is analogous. We assume given an evaluation function \([\![e ]\!]_ {(\rho ,\sigma )}\), which evaluates \(e\) in the local state \(\rho \) and the global state \(\sigma \). We write \((\rho ,\sigma ) \models e\) and \((\rho ,\sigma ) \not \models e\) to denote that \(e\) is true, resp. false, with respect to \(\rho \) and \(\sigma \). A *stack* \(\varPi \) is a non-empty list of local states, whose elements are separated by semicolons. A stack grows leftward, i.e., the leftmost element is the topmost element.
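The definitions above can be sketched concretely. The following is a minimal, hypothetical Python model (all names are ours, not the paper's): states are dictionaries, stacks are lists whose head is the topmost element, and pure expressions are modelled as callables over the two states, with the paper's truth convention (non-zero is true, zero is false).

```python
# Hypothetical model of the paper's states (names are ours).
# A local state rho and a global state sigma are dicts; a stack is a
# list of local states whose head (index 0) is the topmost element.

def update(state, var, val):
    """state[var -> val]: functional update, or extension, of a mapping."""
    new = dict(state)
    new[var] = val
    return new

def evaluate(expr, rho, sigma):
    """[[e]]_(rho, sigma): evaluate a pure expression; expressions are
    modelled here as Python callables over the two states."""
    return expr(rho, sigma)

def models(expr, rho, sigma):
    """(rho, sigma) |= e: non-zero integers count as truth, zero as falsity."""
    return evaluate(expr, rho, sigma) != 0

rho = update({}, "x", 3)        # the effect of: var x = 3
stack = [rho]                   # a one-element stack; the head is current
e = lambda r, s: r["x"] - 3     # the expression x - 3
```

Here `models(e, rho, {})` is false, since `x - 3` evaluates to zero in `rho`.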

A program \(P\) consists of a procedure environment \({\mathrm{env }}_F\) which maps procedure names to pairs of a formal argument list and a statement, and a global state which maps global variables to their initial values. The entry point of the program will be the procedure named main.

The operational semantics is defined as a transition relation over *configurations*, in the style of structural operational semantics. A configuration \({ cfg }\) consists of an active task identifier \(n\), a global variable mapping \(\sigma \) and a set of tasks \(\varTheta \). A task has an identifier and may be in one of three forms: a triple \(\langle e, S, \varPi \rangle \), representing a suspended task waiting to be scheduled, where \(e\) is the guard expression, \(S\) the statement and \(\varPi \) its stack; a pair \(\langle S, \varPi \rangle \), representing the currently active task; or a singleton \(\langle \varPi \rangle \), representing a terminated task.

Transition rules in the semantics are in the form \({\mathrm{env }}_F \vdash { cfg }\rightarrow { cfg }'\), shown in Figure 2. The first two rules (S-Cong and S-Equiv) deal with congruence and structural equivalence. The rules for assignment, \(\mathtt {skip} \), if-then-else and while are self-explanatory. For instance, in the rule S-Assign-Local, the task is of the form \(n\langle S, \varPi '\rangle \) where \(S = x:= e\) and \(\varPi ' = \rho ;\varPi \). Note that the topmost element of the stack \(\varPi \) is the current local state. The rules for sequential composition may deserve some explanation. If the first statement \(S_1\) suspends guarded by \(e\) in the stack \(\varPi '\) with the residual statement \(S'_1\) to be run when resumed, then the entire statement \(S_1;S_2\) suspends in \(\langle e, S'_1;S_2, \varPi '\rangle \), where the residual statement now contains the second statement \(S_2\) (S-Seq-Grd). If \(S_1\) terminates in \(\varPi '\), then \(S_2\) will run next in \(\varPi '\) (S-Seq-Fin). Otherwise, \(S_1\) transfers to \(S'_1\) with the stack \(\varPi '\), so that \(S_1;S_2\) transfers to \(S'_1;S_2\) with the same stack (S-Seq-Step). The await statement immediately suspends (S-Await) the currently active task, enabling us to switch to some other task in accordance with the scheduling rules. An example of the await statement (and the scheduling rules) at work can be found in Figure 3. The statement \(\mathtt{spawn } ~ f ~ \overline{e}\) creates a new task \(n'\langle \mathtt {true} , S, [\overline{x} \mapsto \overline{v}]\rangle \) with \(n'\) a fresh identifier (S-Spawn). The caller task continues to be active. The newly created task is suspended, guarded by \(\mathtt {true} \), and may get scheduled at scheduling points by the scheduling rules (see below). 
Procedure invocation \(f(\overline{e})\) evaluates the arguments \(\overline{e}\) in the current state, pushes into the stack the local state \([\overline{x} \mapsto \overline{v}]\), mapping the formal parameters to the actual arguments, and transfers to \(S;\mathtt {return} \), where \(S\) is the body of \(f\) (S-Call). The \(\mathtt {return} \) statement pops the topmost element from the stack (S-Return). The local variable definition \(\mathtt {var}~ x = e\) extends the current local state with the newly defined variable and initializes it with the value of \(e\) (S-Var).

## 3 Target Language

The target language extends While with first-class *continuations* \([S, \varPi ]\), which are pairs of a statement \(S\) and a stack \(\varPi \), and with support for *guarded (multi)sets*: collections containing pairs of an expression and a value. The expression stored with each element is called a *guard expression*, and is evaluated when we query the set: only elements whose guard expressions hold may be returned. There are five expressions in the language for working with guarded sets: the empty set \(\emptyset \), checking whether a set is empty (\(\mathtt{isEmpty } ~ e_s\)), adding an element (\(\mathtt{add } ~ e_s ~ e_g ~ e\)), fetching an element (\(\mathtt{get } ~ e_s\)) and removing an element (\(\mathtt{del } ~ e_s ~ e\)).
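The five guarded-set operations can be sketched as follows. This is a hypothetical Python model (all names are ours): guards are callables over the two states and are only evaluated at query time; the paper's \(\mathtt{get}\) fetches some enabled element nondeterministically, whereas this sketch returns the first one found.

```python
# Hypothetical sketch of guarded (multi)sets (names are ours, not the
# paper's concrete syntax). Elements carry their guard expressions,
# which are evaluated only when the set is queried.

def empty_set():
    return []                      # the empty guarded set

def is_empty(s):
    return len(s) == 0             # isEmpty e_s

def add(s, guard, value):
    return s + [(guard, value)]    # add e_s e_g e  (multiset: keeps duplicates)

def get(s, rho, sigma):
    """get e_s: return some element whose guard holds in the current
    states (here: the first one found), or None if none is enabled."""
    for guard, value in s:
        if guard(rho, sigma) != 0:
            return value
    return None

def delete(s, value):
    """del e_s e: remove one occurrence of value."""
    out, removed = [], False
    for guard, v in s:
        if v == value and not removed:
            removed = True
        else:
            out.append((guard, v))
    return out
```

For example, after `s = add(add(empty_set(), lambda r, g: r.get("ready", 0), "t1"), lambda r, g: 1, "t2")`, querying `get(s, {}, {})` returns `"t2"`, the only element whose guard holds in the empty local state.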

Similarly to the source language, for the target language we extend While with local variable definitions and procedure calls. We also add the delimited control operators \(\mathtt{shift~ } k ~ \{ S \}\), \(\mathtt{reset~ } \{ S \}\) and \(\mathtt{invoke } ~k\) [5]. The statement \(\mathtt{shift~ } k ~ \{ S \}\) captures the rest of the computation, or continuation, up to the closest surrounding \(\mathtt{reset~ } \{ \}\), binds it to \(k\), and proceeds to execute \(S\). In \(\mathtt{shift~ } k ~ \{ S \}\), \(k\) is a binding occurrence whose scope is \(S\); the statement \(\mathtt{reset~ } \{ S \}\) delimits the captured continuation. The statement \(\mathtt{invoke } ~k\) invokes, or jumps into, the continuation bound to the variable \(k\). The statement \(\mathtt{R~ } \{ S \}\), where \(\mathtt {R} \) is a new constant, is a runtime construct to be explained later.
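The meaning of shift/reset can be illustrated with the standard continuation-passing-style encoding of Danvy and Filinski [5]. The sketch below is ours and works at the level of expressions rather than statements: a computation is a function from a continuation (the rest of the computation up to the nearest reset) to an answer.

```python
# Sketch of shift/reset via the standard CPS encoding [5] (names ours).
# A computation takes a continuation k and produces an answer.

def ret(v):                 # embed a value as a computation
    return lambda k: k(v)

def bind(m, f):             # sequence m, then feed its value to f
    return lambda k: m(lambda v: f(v)(k))

def reset(m):               # delimit: run m with the identity continuation
    return ret(m(lambda v: v))

def shift(f):               # capture the continuation up to the nearest reset
    # the captured continuation is itself delimited, hence wrapped in ret
    return lambda k: f(lambda v: ret(k(v)))(lambda v: v)

def run(m):                 # run a complete program
    return m(lambda v: v)

# reset { 1 + shift k { k(k(5)) } }:
# the captured k is the context "1 + []", so k(k(5)) = 1 + (1 + 5) = 7
prog = reset(bind(shift(lambda k: bind(k(5), k)),
                  lambda v: ret(1 + v)))
```

Here `run(prog)` yields 7: the shift replaces the delimited context "1 + []" with its body, which applies the captured context twice.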

The target language is sequential and, unlike the source language, it contains no explicit support for parallelism. Instead, we provide building blocks – continuations and guarded sets – that are used to switch between tasks and implement an explicit scheduler in Section 4.

We define an operational semantics for the target language as a reduction semantics over configurations, using evaluation contexts. A configuration \(\langle S, \varPi , \sigma \rangle \) is a triple of a statement \(S\), a stack \(\varPi \) and a global state \(\sigma \).

Procedure call \(f (\overline{e})\) reduces to \(S; \mathtt {return} \), where \(S\) is the body of the procedure \(f\), and pushes a local state \([\overline{x} \mapsto \overline{v}]\), binding the procedure's formal parameters to the actual arguments, onto the stack (T-Call). The trailing \(\mathtt {return} \) ensures that, once the execution of \(S; \mathtt {return} \) terminates, the stack is realigned with the original \(\varPi \). The \(\mathtt {return} \) statement pops the topmost element from the stack (T-Return). The remaining rules are self-explanatory.
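The call/return stack discipline can be sketched in a few lines. This is a hypothetical Python model (the procedure name `inc` and all helper names are ours): T-Call pushes a fresh local state binding formals to actuals, and the trailing return pops it, so the stack regains its original height after the body finishes.

```python
# Hypothetical sketch of T-Call and T-Return (names are ours).

def inc_body(stack, sigma):
    # body of inc: u := x + 1 (the result goes to a global variable,
    # since procedures do not return values to the caller)
    sigma["u"] = stack[0]["x"] + 1

env_F = {"inc": (["x"], inc_body)}   # procedure environment env_F

def call(f, args, stack, sigma):
    formals, body = env_F[f]
    stack.insert(0, dict(zip(formals, args)))  # push [x-bar -> v-bar] (T-Call)
    body(stack, sigma)                         # execute the body S
    stack.pop(0)                               # trailing return (T-Return)

stack, sigma = [{}], {"u": 0}
call("inc", [41], stack, sigma)
# afterwards sigma["u"] holds 42 and the stack has its original single frame
```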

## 4 Compilation

The compiled program relies on a *scheduler* (for brevity, in the examples we use \( Sched \) to denote it), shown in Figure 9. Control passes to the scheduler every time a task either suspends or finishes, and the scheduler picks a new task to execute. At runtime, we also use a global variable \(T\), which we assume is not used by the program being compiled. The global variable \(T\) stores the *task set*, corresponding to \(\varTheta \) in the source semantics, which contains all the tasks in the system. The tasks are stored as continuations with guard expressions.

The scheduler loops until the task set is empty (all tasks have terminated). In each iteration it picks from \(T\) a continuation whose guard expression evaluates to \(\mathtt {true} \), removes it from \(T\) and invokes it using \(\mathtt{invoke } ~k\). The body of the scheduler is wrapped in a \(\mathtt {reset} \), guaranteeing that when a task suspends, the capture extends only to the end of the current task. Once the task's execution completes – either by suspending or by finishing its work – control returns to the scheduler.

Suspension (\(\mathtt {await}~ e\) in the source language) is compiled to a \(\mathtt {shift} \) statement. When this statement is evaluated, the rest of the computation up to the end of the enclosing \(\mathtt{R~ } \{ \}\) is captured and stored in the continuation \(k\), and the program is replaced with the body of the \(\mathtt {shift} \). The enclosing \(\mathtt{R~ } \{ \}\) guarantees that we capture only the statements up to the end of the current task, thus providing a facility to resume the task later. The body of the \(\mathtt {shift} \) statement simply takes the captured continuation \(k\) and adds it to the global task set with the appropriate guard expression. After adding the continuation to the task set, control passes back to the scheduler.
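The interplay of scheduler, task set and suspension can be mimicked with Python generators standing in for the one-shot continuations captured by shift: `yield guard` plays the role of the compiled await, and the suspended generator is the continuation stored in \(T\) under that guard. This is a hypothetical sketch (all names, including `producer` and `consumer`, are ours), not the paper's compilation output.

```python
# Hypothetical sketch of the compiled runtime (names are ours).
# A task yields its guard at each await; the suspended generator is the
# continuation, stored in the task set T together with that guard.

sigma = {"flag": 0, "out": []}        # global state
T = []                                 # task set: (guard, continuation) pairs

def spawn(gen):
    T.append((lambda: 1, gen))         # a new task is guarded by true

def scheduler():
    # loop until the task set is empty; in each iteration pick a task
    # whose guard holds, remove it, and invoke its continuation
    while T:
        # (a real scheduler would block or detect deadlock if no guard holds)
        i = next(i for i, (g, _) in enumerate(T) if g() != 0)
        _, task = T.pop(i)
        try:
            guard = next(task)         # run until the next await ...
            T.append((guard, task))    # ... and re-suspend under its guard
        except StopIteration:
            pass                       # the task terminated

def producer():
    sigma["out"].append("produce")
    sigma["flag"] = 1
    yield lambda: 1                    # await true: a pure scheduling point
    sigma["out"].append("produce again")

def consumer():
    yield lambda: sigma["flag"]        # await flag
    sigma["out"].append("consume")

spawn(consumer())
spawn(producer())
scheduler()
# sigma["out"] now records the cooperative interleaving
```

The consumer is picked first but immediately suspends on its guard; the producer then enables it, and the scheduler interleaves the two tasks purely cooperatively, yielding the trace `["produce", "consume", "produce again"]`.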

## 5 Correctness

The compilation scheme for configurations follows the idea of the compilation scheme detailed in Section 4. We have two compilation functions: \([\![\cdot ]\!]_2\), which generates a task set from \(\varTheta \), and \([\![\cdot ]\!]\), which generates a configuration in the target semantics.

Every suspended task in the task set \(\varTheta \) is compiled to a pair consisting of the compiled guard expression and a continuation constructed from the original statement and stack. The statement is wrapped in a \(\mathtt{R~ } \{ \}\) block and we prepend a \(\mathtt {skip} \) statement, just as happens when a continuation is captured in the target language.

If the active task is finished or is suspended (but no new task has been scheduled yet), the generated configuration will immediately contain the scheduler. If the task has suspended, the task is compiled according to the previously described scheme and appended to \(T\). Active tasks are wrapped in two \(\mathtt{R~ } \{ \}\) blocks and the stack \(\varPi \) is concatenated on top of the local state of the scheduler.

**Lemma 1.**

**Proof.**

By induction on the structure of \(K\). \(\square \)

The correctness theorem below states that a one-step reduction in the source language is simulated by multiple-step reductions in the target language.

**Theorem 1.**

**Proof.**

- **Case** $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$ Rules matching this pattern are S-Assign-Local, S-Assign-Global, S-While-False, S-Spawn, S-Return and S-Var. As a representative example, we look at S-Spawn in detail: $$\begin{aligned} \frac{{\mathrm{env }}_F~f = (\overline{x}, S) \quad \overline{v} = [\![\overline{e} ]\!]_{(\rho , \sigma )}\quad n' \text { is fresh}}{{\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \mathtt{spawn } ~ f ~ \overline{e}, \rho ; \varPi \rangle \rightarrow n, \sigma \triangleright n\langle \rho ; \varPi \rangle \parallel n'\langle \mathtt {true} , S, [\overline{x} \mapsto \overline{v}] \rangle } \end{aligned}$$ In this case, the source and target configurations are compiled to: $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ f_A(\overline{e}) \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \end{aligned}$$ Let the bottommost element of \(\varPi \) be \(\rho '\), where \(\rho = \rho '\) if \(\varPi \) is empty. The compiled source configuration reduces as follows: $$\begin{aligned}&[\![{\mathrm{env }}_F ]\!]\vdash \langle \mathtt{R~ } \{ \mathtt{R~ } \{ f_A(\overline{e}) \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{reset~ } \{ \mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; [\![S ]\!] \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}]; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; [\![S ]\!] \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ T := \mathtt{add } ~ T ~ \mathtt {true} ~ k \}; \mathtt {return} \}; \mathtt {return} \}; Sched , \\&\qquad \qquad [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} \}; \mathtt {return} \}; \mathtt {return} \}; Sched , [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \\&\qquad \qquad \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {return} \}; \mathtt {return} \}; Sched , [\overline{x} \mapsto \overline{v}, k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ]\dagger ; \rho ; \varPi \dagger ; \emptyset \dagger , \\&\qquad \qquad \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt {return} \}; Sched , \rho '; \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt {skip} \}; Sched , \emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\&\mapsto \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{(\mathtt {true} , [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, [\overline{x} \mapsto \overline{v}]\dagger ])\}]\rangle \\ \end{aligned}$$ The configuration obtained from this evaluation is exactly equal to the compiled target configuration, thus for this case our claim holds.
- **Case** $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle e, S', \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$ There are only two possible rules: S-Seq-Grd and S-Await. In both cases, it must be that \(\sigma = \sigma '\), \(\varPi = \varPi '\), \(\varTheta \equiv \varTheta '\) and there exists some \(K\) such that \(S = K[\mathtt {await}~ e]\) and \(S' = K [\mathtt {skip} ]\). Therefore, taking into account Lemma 1, the source and target configurations are compiled to: $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![K ]\!][[\![\mathtt {await}~ e ]\!]] \}; \mathtt {return} \}; Sched , \varPi \dagger ;\emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![K ]\!][\mathtt{shift~ } k ~ \{ T := \mathtt{add } ~ T ~ [\![e ]\!] ~ k \}; \mathtt {skip} ] \}; \mathtt {return} \}; Sched , \varPi \dagger ;\emptyset \dagger , \sigma [T \mapsto [\![\varTheta ]\!]_2]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto [\![\varTheta ]\!]_2 \cup \{([\![e ]\!], [\mathtt{R~ } \{ [\![K ]\!][ \mathtt {skip} ] \}, \varPi \dagger ])\}]\rangle \end{aligned}$$ An example of this reduction can be seen in Figure 13.
- **Case** $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \rightarrow n', \sigma ' \triangleright n\langle S', \varPi ' \rangle \parallel \varTheta ' \end{aligned}$$ Rules matching this pattern are S-Seq-Fin, S-Seq-Step, S-If-True, S-If-False, S-While-True and S-Call. In the case of S-Seq-Step, we know that \(S = S_0;S_1\) and \(S' = S_0'; S_1\). By the induction hypothesis, we get that $$\begin{aligned}{}[\![{\mathrm{env }}_F ]\!]\vdash [\![n, \sigma \triangleright n\langle S_0, \varPi \rangle \parallel \varTheta ]\!]\rightarrow [\![n', \sigma ' \triangleright n\langle S_0', \varPi ' \rangle \parallel \varTheta ' ]\!]\end{aligned}$$ As by the definition of the compilation function \([\![S ]\!]= [\![S_0 ]\!]; [\![S_1 ]\!]\) and \([\![S' ]\!]= [\![S_0' ]\!]; [\![S_1 ]\!]\), we obtain the needed result: $$\begin{aligned}{}[\![{\mathrm{env }}_F ]\!]\vdash [\![n, \sigma \triangleright n\langle S_0; S_1, \varPi \rangle \parallel \varTheta ]\!]\rightarrow [\![n', \sigma ' \triangleright n\langle S_0'; S_1, \varPi ' \rangle \parallel \varTheta ' ]\!]\end{aligned}$$ For S-Seq-Fin, we know that \(S = S_0; S_1\) and \(S' = S_1\). Then the case follows by analyzing the step taken to reduce \(S_0\).
The other cases are straightforward.

- **Case**, one of the following three: $$\begin{aligned} {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \varPi '\rangle \parallel n' \langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n', \sigma \triangleright n\langle \varPi '\rangle \parallel n'\langle S, \varPi \rangle \parallel \varTheta \\ {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle e', S', \varPi '\rangle \parallel n' \langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n', \sigma \triangleright n\langle e', S', \varPi ' \rangle \parallel n'\langle S, \varPi \rangle \parallel \varTheta \\ {\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle e, S, \varPi \rangle \parallel \varTheta&\rightarrow n, \sigma \triangleright n\langle S, \varPi \rangle \parallel \varTheta \\ \end{aligned}$$ These three patterns match each of the scheduling rules. We will look only at the first one: $$\begin{aligned} \frac{(\rho , \sigma ) \models e}{{\mathrm{env }}_F \vdash n, \sigma \triangleright n\langle \varPi ' \rangle \parallel n'\langle e, S, \rho ; \varPi \rangle \rightarrow n', \sigma \triangleright n'\langle S, \rho ; \varPi \rangle \parallel n\langle \varPi ' \rangle }\textsc {S-Sched-Fin} \end{aligned}$$ The source and target configurations are compiled to: $$\begin{aligned}{}[\![{ cfg }_S ]\!]&= \langle Sched , \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\ [\![{ cfg }_S' ]\!]&= \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![S ]\!] \}; \mathtt {return} \}; Sched , \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \\ \end{aligned}$$ The initial configuration reduces as follows (with some of the steps omitted): $$\begin{aligned}&[\![{\mathrm{env }}_F ]\!]\vdash \langle \mathtt {While} {(\lnot \mathtt{isEmpty } ~ T)}~\mathtt {do} ~{\mathtt{reset~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}},\\&\qquad \qquad \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto \langle \mathtt{reset~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}; {Sched}, \\&\qquad \qquad \emptyset , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto \langle \mathtt{R~ } \{ k := \mathtt{get } ~ T; T := \mathtt{del } ~ T ~ k; \mathtt{invoke } ~k \}; {Sched}, \emptyset \dagger , \sigma [T \mapsto \{([\![e ]\!], [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ])\}]\rangle \\&\mapsto ^* \langle \mathtt{R~ } \{ \mathtt{invoke } ~k \}; {Sched}, [k \mapsto [\mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}, \rho ; \varPi \dagger ]] \dagger , \sigma [T \mapsto \emptyset ]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ \mathtt {skip} ; [\![S ]\!] \}; \mathtt {return} \}; {Sched}, \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \\&\mapsto \langle \mathtt{R~ } \{ \mathtt{R~ } \{ [\![S ]\!] \}; \mathtt {return} \}; {Sched}, \rho ; \varPi \dagger ; \emptyset \dagger , \sigma [T \mapsto \emptyset ]\rangle \end{aligned}$$

## 6 Conclusion

In this paper, we formalized a compilation scheme for cooperative multi-tasking into delimited continuations. For the source language, we extend While with procedure calls and operations for blocking and creation of new tasks. The target language extends While with shift/reset—the target language is sequential. We then proved that the compilation scheme is correct: reductions in the source language are simulated by corresponding reductions in the target language. We have implemented this compilation scheme in our compiler from ABS to Scala. The compiler covers a much richer language than our source language, including object-oriented features, and employs the experimental continuations plugin for Scala. The compiler is integrated into the wider ABS Tool Suite, available at http://tools.hats-project.eu/. We are currently formalizing the results of the paper in the proof assistant Agda.

## Footnotes

1. In ABS, different tasks originating from the same object may communicate with each other via fields of the object.

## Notes

### Acknowledgments

This research was supported by the EU FP7 ICT project no. 231620 (HATS), the Estonian Centre of Excellence in Computer Science, EXCS, financed mainly by the European Regional Development Fund, ERDF, the Estonian Ministry of Education and Research target-financed research theme no. 0140007s12, and the Estonian Science Foundation grant no. 9398.

## References

1. Adya, A., Howell, J., Theimer, M., Bolosky, W.J., Douceur, J.R.: Cooperative task management without manual stack management. In: ATEC 2002: Proceedings of the General Track of the Annual Conference on USENIX Annual Technical Conference, pp. 289–302. USENIX (2002)
2. Agha, G.: Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press (1986)
3. de Boer, F.S., Clarke, D., Johnsen, E.B.: A complete guide to the future. In: De Nicola, R. (ed.) ESOP 2007. LNCS, vol. 4421, pp. 316–330. Springer, Heidelberg (2007)
4. Caromel, D., Henrio, L., Serpette, B.P.: Asynchronous and deterministic objects. ACM SIGPLAN Notices (POPL 2004) 39(1), 123–134 (2004)
5. Danvy, O., Filinski, A.: Abstracting control. In: LFP 1990: Proceedings of the 1990 ACM Conference on LISP and Functional Programming, pp. 151–160. ACM (1990)
6. Haynes, C.T., Friedman, D.P., Wand, M.: Continuations and coroutines. In: LFP 1984: Proceedings of the 1984 ACM Symposium on LISP and Functional Programming, pp. 293–298. ACM (1984)
7. Johnsen, E.B., Hähnle, R., Schäfer, J., Schlatte, R., Steffen, M.: ABS: A core language for abstract behavioral specification. In: Aichernig, B.K., de Boer, F.S., Bonsangue, M.M. (eds.) FMCO 2010. LNCS, vol. 6957, pp. 142–164. Springer, Heidelberg (2011)
8. Johnsen, E.B., Owe, O., Yu, I.C.: Creol: A type-safe object-oriented model for distributed concurrent systems. Theoretical Computer Science 365(1-2), 23–66 (2006)
9. Karmani, R.K., Shali, A., Agha, G.: Actor frameworks for the JVM platform: a comparative analysis. In: PPPJ 2009: Proceedings of the 7th International Conference on Principles and Practice of Programming in Java, pp. 11–20. ACM (2009)
10. Schäfer, J., Poetzsch-Heffter, A.: JCoBox: Generalizing active objects to concurrent components. In: D'Hondt, T. (ed.) ECOOP 2010. LNCS, vol. 6183, pp. 275–299. Springer, Heidelberg (2010)
11. Srinivasan, S., Mycroft, A.: Kilim: Isolation-typed actors for Java. In: Vitek, J. (ed.) ECOOP 2008. LNCS, vol. 5142, pp. 104–128. Springer, Heidelberg (2008)
12. Wand, M.: Continuation-based multiprocessing. In: LFP 1980: Proceedings of the 1980 ACM Conference on LISP and Functional Programming, pp. 19–28. ACM (1980)