Imperative process algebra and models of computation

Studies of issues related to computability and computational complexity involve the use of a model of computation. Pivotal to such a model are the computational processes considered. Processes of this kind can be described using an imperative process algebra based on ACP (Algebra of Communicating Processes). In this paper, it is investigated whether the imperative process algebra concerned can play a role in the field of models of computation. It is demonstrated that the process algebra is suitable for describing, in a mathematically precise way, models of computation corresponding to existing models based on sequential, asynchronous parallel, and synchronous parallel random access machines, as well as time and work complexity measures for those models. A probabilistic variant of the model based on sequential random access machines, together with complexity measures for it, is also described.


Introduction
A computational process is a process that solves a computational problem. A computational process is applied to a data environment that consists of data organized and accessible in a specific way. Well-known examples of data environments are the tapes found in Turing machines and the memories found in random access machines. The application of a computational process to a data environment yields another data environment. The data environment to which the process is applied represents an instance of the computational problem that is solved by the process, and the data environment yielded by the application represents the solution of that instance. A computational process is divided into simple steps, each of which depends on and has an impact on only a small portion of the data environment to which the process is applied.
A basic assumption in this paper is that a model of computation is fully characterized by: (a) a set of possible computational processes, (b) for each possible computational process, a set of possible data environments, and (c) the effect of applying such processes to such environments. The set of possible computational processes is usually given indirectly, mostly by means of abstract machines that have a built-in program belonging to a set of possible programs such that the possible computational processes are exactly the processes produced by those machines when they execute their built-in program. The abstract machines with their built-in programs emphasize the mechanical nature of the possible computational processes. However, in this way, details of how possible computational processes are produced become part of the model of computation. To the best of my knowledge, all definitions given and all results proved with respect to a model of computation can do without reference to such details.
In [27], an extension of ACP [6] is presented whose additional features include assignment actions to change data in the course of a process, guarded commands to proceed at certain stages of a process in a way that depends on changing data, and data parameterized actions to communicate data between processes. The extension concerned is called ACP τ ǫ -I. The term imperative process algebra was coined in [28] for process algebras like ACP τ ǫ -I. In [27], it is discussed what qualities of ACP τ ǫ -I distinguish it from other imperative process algebras, how its distinguishing qualities are achieved, and what its relevance is to the verification of properties of processes carried out by contemporary computer-based systems. Moreover, that paper goes into one of the application areas of ACP τ ǫ -I, namely the area of information-flow security analysis.
The current paper studies whether ACP τ ǫ -I can play a role in the field of models of computation. The idea of this study originates from the experience that definitions of models of computation and results about them in the scientific literature tend to lack preciseness, in particular where models of parallel computation are concerned. The study takes for granted the basic assumption about the characterization of models of computation mentioned above. Moreover, it focuses on models of computation that are intended for the investigation of issues related to computability and computational complexity. It does not consider models of computation geared to computation as it takes place on concrete computers or computer networks of a certain kind. Such models are left for follow-up studies. Outcomes of this study include mathematically precise definitions of models of computation corresponding to models based on sequential random access machines, asynchronous parallel random access machines, synchronous parallel random access machines, and a probabilistic variant of sequential random access machines.
This paper is organized as follows. First, a survey of the imperative process algebra ACP τ ǫ -I and its extension with recursion is given (Section 2). Next, it is explained in this process algebra what it means that a given process computes a given function (Section 3). After that, a version of the sequential random access machine model of computation is described in the setting introduced in the previous two sections (Section 4). Following that, an asynchronous parallel random access machine model of computation and a synchronous parallel random access machine model of computation are described in that setting as well (Section 5 and Section 6, respectively). Then, complexity measures for the models of computation presented in the previous three sections are introduced (Section 7). Thereafter, the question whether the presented synchronous parallel random access machine model of computation is a reasonable model of computation is treated (Section 8). Furthermore, a probabilistic variant of the presented random access machine model of computation is described (Section 9). Finally, some concluding remarks are made (Section 10).
Section 2 is an abridged version of [27]. Portions of Sections 2-4 of that paper have been copied verbatim or slightly modified.


The Imperative Process Algebra ACP τ ǫ -I

The imperative process algebra ACP τ ǫ -I is an extension of ACP τ ǫ , the version of ACP with empty process and silent step constants that was first presented in [5, Section 5.3]. In this section, first a short survey of ACP τ ǫ is given and then ACP τ ǫ -I is introduced as an extension of ACP τ ǫ . Moreover, recursion in the setting of ACP τ ǫ -I is treated and soundness and (semi-)completeness results for the axioms of ACP τ ǫ -I with recursion are presented.

ACP with Empty Process and Silent Step
In this section, a short survey of ACP τ ǫ is given. A more comprehensive treatment of this process algebra can be found in [5].
In ACP τ ǫ , it is assumed that a fixed but arbitrary finite set A of basic actions, with τ, δ, ǫ ∉ A, and a fixed but arbitrary commutative and associative communication function γ : (A ∪ {τ, δ}) × (A ∪ {τ, δ}) → (A ∪ {τ, δ}), such that γ(τ, a) = δ and γ(δ, a) = δ for all a ∈ A ∪ {τ, δ}, have been given. Basic actions are taken as atomic processes. The function γ is regarded to give the result of simultaneously performing any two basic actions for which this is possible, and to be δ otherwise. Henceforth, we write A τ for A ∪ {τ } and A τ δ for A ∪ {τ, δ}.
The algebraic theory ACP τ ǫ has one sort: the sort P of processes. This sort is made explicit to anticipate the need for many-sortedness later on. The algebraic theory ACP τ ǫ has the following constants and operators to build terms of sort P:
- a basic action constant a : P for each a ∈ A;
- a silent step constant τ : P;
- an inaction constant δ : P;
- an empty process constant ǫ : P;
- a binary alternative composition or choice operator + : P × P → P;
- a binary sequential composition operator • : P × P → P;
- a binary parallel composition or merge operator ∥ : P × P → P;
- a binary left merge operator ⌊⌊ : P × P → P;
- a binary communication merge operator | : P × P → P;
- a unary encapsulation operator ∂ H : P → P for each H ⊆ A and for H = A τ ;
- a unary abstraction operator τ I : P → P for each I ⊆ A.
It is assumed that there is a countably infinite set X of variables of sort P, which contains x, y and z. Terms are built as usual. Infix notation is used for the binary operators. The following precedence conventions are used to reduce the need for parentheses: the operator • binds stronger than all other binary operators and the operator + binds weaker than all other binary operators.
The constants a (a ∈ A), τ , ǫ, and δ can be explained as follows:
- a denotes the process that first performs the observable action a and then terminates successfully;
- τ denotes the process that first performs the unobservable action τ and then terminates successfully;
- ǫ denotes the process that terminates successfully without performing any action;
- δ denotes the process that cannot do anything, it cannot even terminate successfully.
Let t and t ′ be closed ACP τ ǫ terms denoting processes p and p ′ , respectively. Then the operators +, •, ∥, ∂ H (H ⊆ A or H = A τ ), and τ I (I ⊆ A) can be explained as follows:
- t + t ′ denotes the process that behaves as either p or p ′ ;
- t • t ′ denotes the process that behaves as p and p ′ in sequence;
- t ∥ t ′ denotes the process that behaves as p and p ′ in parallel;
- ∂ H (t) denotes the process that behaves as p, except that actions from H are blocked from being performed;
- τ I (t) denotes the process that behaves as p, except that actions from I are turned into the unobservable action τ .
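To make the informal explanations above concrete, the following sketch (not part of ACP τ ǫ itself; the encoding and the restriction to +, •, ǫ, δ and basic actions are my own, and the merge operators are omitted for brevity) computes the set of completed traces of finite closed terms:

```python
# Process terms as nested tuples; completed traces computed recursively.
# delta has no completed traces; epsilon has only the empty trace.
DELTA, EPSILON = ("delta",), ("epsilon",)

def act(a):            # basic action constant a
    return ("act", a)

def alt(p, q):         # alternative composition p + q
    return ("alt", p, q)

def seq(p, q):         # sequential composition p . q
    return ("seq", p, q)

def traces(p):
    """Set of completed traces (tuples of action names) of a finite term."""
    tag = p[0]
    if tag == "delta":
        return set()                       # deadlock: no successful termination
    if tag == "epsilon":
        return {()}                        # terminates without performing actions
    if tag == "act":
        return {(p[1],)}
    if tag == "alt":
        return traces(p[1]) | traces(p[2])
    if tag == "seq":
        return {s + t for s in traces(p[1]) for t in traces(p[2])}
    raise ValueError(tag)
```

For instance, a • (b + ǫ) has the completed traces ("a", "b") and ("a",), while a • δ has none, reflecting that δ cannot terminate successfully.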
The operators ⌊⌊ and | are of an auxiliary nature. They make a finite axiomatization of ACP τ ǫ possible. The axioms of ACP τ ǫ are presented in Table 1. In this table, a, b, and α stand for arbitrary members of A τ δ , H stands for an arbitrary subset of A or the set A τ , and I stands for an arbitrary subset of A. So, CM3, CM7, D0-D4, T0-T4, and BE are actually axiom schemas. In this paper, axiom schemas will usually be referred to as axioms.
The term ∂ Aτ (x) • ∂ Aτ (y) occurring in axiom CM1E is needed to handle successful termination in the presence of ǫ: it stands for the process that behaves the same as ǫ if both x and y stand for a process that has the option to behave the same as ǫ, and it stands for the process that behaves the same as δ otherwise. In [5, Section 5.3], the symbol √ is used instead of ∂ Aτ .
Notice that the equation α • δ = α would be derivable from the axioms of ACP τ ǫ if operators ∂ H where H = A τ and τ ∈ H were added to ACP τ ǫ . The notation ∑_{i=1}^{n} t_i , where n ≥ 1, will be used for right-nested alternative compositions. For each n ∈ N+, the term ∑_{i=1}^{n} t_i is defined by induction on n as follows: ∑_{i=1}^{1} t_i = t_1 and ∑_{i=1}^{n+1} t_i = t_1 + ∑_{i=1}^{n} t_{i+1} . In addition, the convention will be used that ∑_{i=1}^{0} t_i = δ. Moreover, we write ∂ a and τ a , where a ∈ A, for ∂ {a} and τ {a} , respectively.
In this section, ACP τ ǫ -I, imperative ACP τ ǫ , is introduced as an extension of ACP τ ǫ . In [27], the paper in which ACP τ ǫ -I was first presented, a comprehensive treatment of this imperative process algebra can be found. ACP τ ǫ -I extends ACP τ ǫ with features to communicate data between processes, to change data involved in a process in the course of the process, and to proceed at certain stages of a process in a way that depends on the changing data.
In ACP τ ǫ -I, it is assumed that the following has been given with respect to data:
- a many-sorted signature Σ D that includes:
  • a sort D of data and a sort B of bits;
  • constants of sort D and/or operators with result sort D;
  • constants 0 and 1 of sort B and operators with result sort B;
- a minimal algebra D of the signature Σ D in which the carrier of sort B has cardinality 2 and the equation 0 = 1 does not hold.
In ACP τ ǫ -I, it is moreover assumed that a finite or countably infinite set V of flexible variables has been given. A flexible variable is a variable whose value may change in the course of a process. We write D for the set of all closed terms over the signature Σ D that are of sort D.
A flexible variable valuation is a function from V to D. Flexible variable valuations are intended to provide the data values, which are members of D's carrier of sort D, assigned to flexible variables when an ACP τ ǫ -I term of sort D is evaluated. To fit better in an algebraic setting, they provide closed terms from D that denote those data values instead.
Because D is a minimal algebra, for each sort S that is included in Σ D , each member of D's carrier of sort S can be represented by a closed term over Σ D that is of sort S.
In the rest of this paper, for each sort S that is included in Σ D , let ct S be a function from D's carrier of sort S to the set of all closed terms over Σ D that are of sort S such that, for each member d of D's carrier of sort S, the term ct S (d) represents d. We write d, where d is a member of D's carrier of sort S, for ct S (d) if it is clear from the context that d stands for a closed term of sort S representing d.
Flexible variable valuations are used in Sections 4-6 and 9 to represent the data environments referred to in Section 1.
Let V ⊆ V. Then a V -indexed data environment is a function from V to D's carrier of sort D. Let µ be a V -indexed data environment and ρ be a flexible variable valuation.

Below, the sorts, constants and operators of ACP τ ǫ -I are introduced. The operators of ACP τ ǫ -I include two variable-binding operators. The formation rules for ACP τ ǫ -I terms are the usual ones for the many-sorted case (see e.g. [34,38]) and in addition the following rule: if O is a variable-binding operator O : S 1 × . . . × S n → S that binds a variable of sort S ′ , t 1 , . . ., t n are terms of sorts S 1 , . . ., S n , respectively, and X is a variable of sort S ′ , then OX(t 1 , . . ., t n ) is a term of sort S.
An extensive formal treatment of the phenomenon of variable-binding operators can be found in [32]. ACP τ ǫ -I has the following sorts: the sorts included in Σ D , the sort C of conditions, and the sort P of processes.
For each sort S included in Σ D other than D, ACP τ ǫ -I has only the constants and operators included in Σ D to build terms of sort S.
ACP τ ǫ -I has, in addition to the constants and operators included in Σ D to build terms of sort D, the following constants to build terms of sort D: for each v ∈ V, the flexible variable constant v : D. We write C for the set of all closed ACP τ ǫ -I terms of sort C. ACP τ ǫ -I has, in addition to the constants and operators of ACP τ ǫ , the following operators to build terms of sort P:

- an n-ary data parameterized action operator a : D n → P for each a ∈ A, for each n ∈ N;
- a unary assignment action operator v := : D → P for each v ∈ V;
- a binary guarded command operator :→ : C × P → P;
- a unary evaluation operator V ρ : P → P for each ρ ∈ V → D.
We write P for the set of all closed ACP τ ǫ -I terms of sort P. It is assumed that there are countably infinite sets of variables of sort D and C and that the sets of variables of sort D, C, and P are mutually disjoint and disjoint from V.
The same notational conventions are used as before. Infix notation is also used for the additional binary operators. Moreover, the notation [v := e], where v ∈ V and e is an ACP τ ǫ -I term of sort D, is used for the term v := (e). Each term from C can be taken as a formula of a first-order language with equality of D by taking the flexible variable constants as additional variables of sort D. The flexible variable constants are implicitly taken as additional variables of sort D wherever the context asks for a formula. In this way, each term from C can be interpreted in D as a formula.
The notation φ ⇔ ψ, where φ and ψ are ACP τ ǫ -I terms of sort C, is used for the term (φ ⇒ ψ) ∧ (ψ ⇒ φ). The axioms of ACP τ ǫ -I (given below) include an equation φ = ψ for each two terms φ and ψ from C for which the formula φ ⇔ ψ holds in D.
Let a be a basic action from A, e 1 , . . ., e n , and e be terms from D, φ be a term from C, and t be a term from P denoting a process p. Then the additional operators to build terms of sort P can be explained as follows:
- the term a(e 1 , . . ., e n ) denotes the process that first performs the data parameterized action a(e 1 , . . ., e n ) and then terminates successfully;
- the term [v := e] denotes the process that first performs the assignment action [v := e], whose intended effect is the assignment of the result of evaluating e to flexible variable v, and then terminates successfully;
- the term φ :→ t denotes the process that behaves as p if condition φ holds and as δ otherwise;
- the term V ρ (t) denotes the process that behaves as p, except that each subterm of t that belongs to D is evaluated using flexible variable valuation ρ updated according to the assignment actions that have taken place at the point where the subterm is encountered.
Evaluation operators are a variant of state operators (see e.g. [4]).
The following closed ACP τ ǫ -I term is reminiscent of a program that computes the difference between two integers by subtracting the smaller one from the larger one (i, j, d ∈ V): That is, the final value of d is the absolute value of the result of subtracting the initial value of i from the initial value of j. Let ρ be a flexible variable valuation such that ρ(i) = 11 and ρ(j) = 3. Then the following equation can be derived from the axioms of ACP τ ǫ -I given below: This equation shows that in the case where the initial values of i and j are 11 and 3, the final value of d is 8, which is the absolute value of the result of subtracting 11 from 3.
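The behaviour described above can be mimicked operationally. The following is a hypothetical Python rendering of such a subtraction program acting on a flexible variable valuation; the particular loop structure is an assumption and does not reproduce the displayed ACP τ ǫ -I term:

```python
# A hypothetical simulation: repeatedly decrement both i and j until one of
# them reaches 0; the remaining nonzero value is |i - j| and is stored in d.
# The variable names i, j, d follow the paper.
def difference(rho):
    rho = dict(rho)                       # flexible variable valuation (copied)
    while rho["i"] > 0 and rho["j"] > 0:  # guard: both values still positive
        rho["i"] -= 1                     # assignment action [i := i - 1]
        rho["j"] -= 1                     # assignment action [j := j - 1]
    rho["d"] = rho["i"] + rho["j"]        # one of them is 0, so this is |i - j|
    return rho
```

With the initial valuation ρ(i) = 11 and ρ(j) = 3 from the example, the final value of d is 8.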
A flexible variable valuation ρ can be extended homomorphically from flexible variables to ACP τ ǫ -I terms of sort D and ACP τ ǫ -I terms of sort C. Below, these extensions are denoted by ρ as well. Moreover, we write ρ[v → e] for the flexible variable valuation ρ ′ defined by ρ ′ (v) = e and ρ ′ (v ′ ) = ρ(v ′ ) for all v ′ ∈ V with v ′ ≠ v.

The subsets A par , A ass , and A of P referred to below are defined as follows: The elements of A are the terms from P that denote the processes that are considered to be atomic. Henceforth, we write A τ for A ∪ {τ }, A δ for A ∪ {δ}, and A τ δ for A ∪ {τ, δ}.
The axioms of ACP τ ǫ -I are the axioms presented in Tables 1 and 2, where α stands for an arbitrary term from A τ δ , H stands for an arbitrary subset of A or the set A τ , I stands for an arbitrary subset of A, e, e 1 , e 2 , . . . and e ′ , e ′ 1 , e ′ 2 , . . . stand for arbitrary terms from D, φ and ψ stand for arbitrary terms from C, v stands for an arbitrary flexible variable from V, and ρ stands for an arbitrary flexible variable valuation from V → D. Moreover, a, b, and c stand for arbitrary members of A τ δ in Table 1 and for arbitrary members of A in Table 2.

ACP τ ǫ -I with Recursion
In this section, recursion in the setting of ACP τ ǫ -I is treated. A closed ACP τ ǫ -I term of sort P denotes a process with a finite upper bound to the number of actions that it can perform. Recursion allows the description of processes without a finite upper bound to the number of actions that they can perform.
A recursive specification over ACP τ ǫ -I is a set {X i = t i | i ∈ I}, where I is a finite set, each X i is a variable from X , each t i is an ACP τ ǫ -I term of sort P in which only variables from {X i | i ∈ I} occur, and X i ≠ X j for all i, j ∈ I with i ≠ j. We write vars(E), where E is a recursive specification over ACP τ ǫ -I, for the set of all variables that occur in E. Let E be a recursive specification and let X ∈ vars(E). Then there exists a unique equation in E whose left-hand side is X. This equation is called the recursion equation for X in E.
Below, guarded linear recursive specifications over ACP τ ǫ -I are introduced. The set L of linear ACP τ ǫ -I terms is inductively defined by the following rules: Let X be a variable from X and let t be an ACP τ ǫ -I term in which X occurs. Then an occurrence of X in t is guarded if t has a subterm of the form α • t ′ where α ∈ A and t ′ contains this occurrence of X. Notice that an occurrence of a variable in a linear ACP τ ǫ -I term may not be guarded. A guarded linear recursive specification over ACP τ ǫ -I is a recursive specification {X i = t i | i ∈ I} over ACP τ ǫ -I where each t i is a linear ACP τ ǫ -I term, and there does not exist an infinite sequence i 0 i 1 . . . over I such that, for each k ∈ N, there is an occurrence of X i k+1 in t i k that is not guarded.
A solution of a guarded linear recursive specification E over ACP τ ǫ -I in some model of ACP τ ǫ -I is a set {p X | X ∈ vars(E)} of elements of the carrier of sort P in that model such that each equation in E holds if, for all X ∈ vars(E), X is assigned p X . A guarded linear recursive specification has a unique solution under the equivalence defined in [27] for ACP τ ǫ -I extended with guarded linear recursion. If {p X | X ∈ vars(E)} is the unique solution of a guarded linear recursive specification E, then, for each X ∈ vars(E), p X is called the X-component of the unique solution of E.
ACP τ ǫ -I is extended with guarded linear recursion by adding constants for solutions of guarded linear recursive specifications over ACP τ ǫ -I and axioms concerning these additional constants. For each guarded linear recursive specification E over ACP τ ǫ -I and each X ∈ vars(E), a constant X|E of sort P, that stands for the X-component of the unique solution of E, is added to the constants of ACP τ ǫ -I. The equation RDP and the conditional equation RSP given in Table 3 are added to the axioms of ACP τ ǫ -I. In this table, X stands for an arbitrary variable from X , t stands for an arbitrary ACP τ ǫ -I term of sort P, E stands for an arbitrary guarded linear recursive specification over ACP τ ǫ -I, and the notation t|E is used for t with, for all X ∈ vars(E), all occurrences of X in t replaced by X|E . Side conditions restrict what X, t and E stand for.
We write ACP τ ǫ -I+REC for the resulting theory. Furthermore, we write P rec for the set of all closed ACP τ ǫ -I+REC terms of sort P.
RDP and RSP together postulate that guarded linear recursive specifications over ACP τ ǫ -I have unique solutions. Because RSP introduces conditional equations in ACP τ ǫ -I+REC, it is understood that conditional equational logic is used in deriving equations from the axioms of ACP τ ǫ -I+REC. A complete inference system for conditional equational logic can for example be found in [5,16].
The following closed ACP τ ǫ -I+REC term is reminiscent of a program that computes by repeated subtraction the quotient and remainder of dividing a non-negative integer by a positive integer (i, j, q, r ∈ V): where E is the guarded linear recursive specification that consists of the following two equations (Q, R ∈ X ): Let ρ be a flexible variable valuation such that ρ(i) = 11 and ρ(j) = 3. Then the following equation can be derived from the axioms of ACP τ ǫ -I+REC: This equation shows that in the case where the initial values of i and j are 11 and 3, the final values of q and r are 3 and 2, which are the quotient and remainder of dividing 11 by 3.
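The repeated-subtraction behaviour can again be mimicked operationally. The following hypothetical Python sketch acts on a flexible variable valuation; the loop is an assumption standing in for the recursive specification E, which is not reproduced here:

```python
# Repeated subtraction: q counts how many times j fits into i, and r holds
# what remains. Variable names i, j, q, r follow the paper.
def divmod_by_subtraction(rho):
    rho = dict(rho)                  # flexible variable valuation (copied)
    rho["q"], rho["r"] = 0, rho["i"]
    while rho["r"] >= rho["j"]:      # guard: another subtraction is possible
        rho["r"] -= rho["j"]         # assignment action [r := r - j]
        rho["q"] += 1                # assignment action [q := q + 1]
    return rho
```

With ρ(i) = 11 and ρ(j) = 3 as in the example, the final values are q = 3 and r = 2.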
In [27], an equational axiom schema CFAR (Cluster Fair Abstraction Rule) is added to ACP τ ǫ -I+REC. CFAR expresses that every cluster of τ actions will be exited sooner or later. This is a fairness assumption made in the verification of many properties concerning the external behaviour of systems. We will write ACP τ ǫ -I+REC+CFAR for the theory ACP τ ǫ -I+REC extended with CFAR. We write T ⊢ t = t ′ , where T is ACP τ ǫ -I+REC or ACP τ ǫ -I+REC+CFAR, to indicate that the equation t = t ′ is derivable from the axioms of T using a complete inference system for conditional equational logic.

Soundness and Completeness Results
In [27], a structural operational semantics of ACP τ ǫ -I+REC is presented and an equivalence relation ↔ rb on P rec based on this structural operational semantics is defined. This equivalence relation reflects the idea that two processes are equivalent if they can simulate each other insofar as their observable potentials to make transitions by performing actions and to terminate successfully are concerned, taking into account the assignments of data values to flexible variables under which the potentials are available.
In this section, soundness and (semi-)completeness results for the axioms of ACP τ ǫ -I+REC+CFAR with respect to ↔ rb are presented. The proofs can be found in [27].
The axiom system of ACP τ ǫ -I+REC+CFAR is sound with respect to ↔ rb for equations between terms from P rec .
Theorem 1 (Soundness). For all terms t, t ′ ∈ P rec , t = t ′ is derivable from the axioms of ACP τ ǫ -I+REC+CFAR only if t ↔ rb t ′ .

The axiom system of ACP τ ǫ -I+REC+CFAR is incomplete with respect to ↔ rb for equations between terms from P rec , and there is no straightforward way to rectify this. Below, two semi-completeness results are presented. The next two lemmas are used in the proofs of those results.
A term t ∈ P rec is called abstraction-free if no abstraction operator occurs in t.
Lemma 1.For all abstraction-free t ∈ P rec , there exists a guarded linear recursive specification E and X ∈ vars(E) such that ACP τ ǫ -I+REC ⊢ t = X|E .
Lemma 2. For all bool-conditional t ∈ P rec , there exists a guarded linear recursive specification E and an X ∈ vars(E) such that ACP τ ǫ -I+REC+CFAR ⊢ t = X|E .

The following two theorems are the semi-completeness results referred to above.

Results about the Evaluation Operators
For a better understanding of the evaluation operators, some results about these rather unfamiliar operators are given in this section.
The following lemma tells us that a closed term of the form V ρ (t) equals a bool-conditional closed term.

Lemma 3. For all t ∈ P rec , for all ρ ∈ V → D, there exists a bool-conditional t ′ ∈ P rec such that ACP τ ǫ -I+REC+CFAR ⊢ V ρ (t) = t ′ .

Proof. This is straightforwardly proved by induction on the length of t, case distinction on the structure of t, and in the case of the constants for solutions of guarded linear recursive specifications additionally by induction on the structure of the right-hand side of a recursion equation.

⊓ ⊔
The following theorem is a soundness and completeness result for closed terms of the form V ρ (t).
Proof. This follows immediately from Theorem 1, Theorem 3, and Lemma 3. ⊓ ⊔

Below, an elimination theorem for closed terms of the form V ρ (t) is presented. In preparation, the subsets B and B cf of P are introduced.
The set B of basic ACP τ ǫ -I terms is inductively defined by the following rules:

Lemma 4. For all bool-conditional t ∈ P, there exists a bool-conditional t ′ ∈ B such that ACP τ ǫ -I ⊢ t = t ′ .

Proof. This is straightforwardly proved by induction on the length of t and case distinction on the structure of t.

⊓ ⊔
The set B cf of condition-free basic ACP τ ǫ -I terms is inductively defined by the following rules:

Lemma 5. For all bool-conditional t ∈ B, there exists a t ′ ∈ B cf such that ACP τ ǫ -I ⊢ t = t ′ .

Proof. This is easily proved by induction on the structure of t.

⊓ ⊔
A term t ∈ P rec is called a finite process term if there exists a term t ′ ∈ P such that ACP τ ǫ -I+REC+CFAR ⊢ t = t ′ . The following theorem tells us that a finite process term of the form V ρ (t) equals a condition-free basic term.
Theorem 5. For all t ∈ P rec and ρ ∈ V → D for which V ρ (t) is a finite process term, there exists a t ′ ∈ B cf such that ACP τ ǫ -I+REC+CFAR ⊢ V ρ (t) = t ′ .

Proof. This follows immediately from Lemmas 3, 4, and 5.

⊓ ⊔
The terms from B cf are reminiscent of computation trees. In Section 3, use is made of the fact that each finite process term of the form V ρ (t) equals such a term. Not every term from B cf corresponds to a computation tree of which each path represents a computation that eventually halts, not even when it concerns a computation tree with a single path.
A term t ∈ P rec is called a terminating process term if there exists a term t ′ ∈ B cf such that ACP τ ǫ -I+REC+CFAR ⊢ t = t ′ and t ′ can be formed by applying only the formation rules 2, 3, and 4 of B cf .

Table 4. Axioms for the projection operators

Table 5. Axioms for the action renaming operators

Extensions
In this section, two extensions of ACP τ ǫ -I are treated, namely an extension with projection and an extension with action renaming. It is not unusual to come across these extensions in applications of ACP-style process algebras. The first extension is treated here because projections can be used to determine the maximum number of actions that a finite process can perform. The second extension is treated here because action renaming makes it easy to define the synchronous variant of the parallel composition operator of ACP τ ǫ -I needed later in this paper.

Let T be either ACP τ ǫ -I or one of its extensions introduced before. T can be extended with projection by adding, for each n ∈ N, a unary projection operator π n : P → P to the operators of T and adding the axioms given in Table 4 to the axioms of T . In this table, n stands for an arbitrary natural number, α stands for an arbitrary term from A δ , and φ stands for an arbitrary term from C.
Let t be a closed term of the extended theory. Then the projection operator π n can be explained as follows: π n (t) denotes the process that behaves the same as the process denoted by t except that it terminates successfully after n actions have been performed.
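At the level of completed traces, the effect of π n can be sketched as follows. This is an illustration under the assumption that a finite process is identified with its set of completed traces, which is coarser than the semantics used in the paper:

```python
def pi(n, trace_set):
    # pi_n: every completed trace is cut off after at most n actions,
    # after which the projected process terminates successfully.
    return {t[:n] for t in trace_set}

def max_actions(trace_set):
    # The maximum number of actions a finite process can perform is the
    # least n for which pi_n leaves the process unchanged (here: the
    # length of a longest trace).
    n = 0
    while pi(n, trace_set) != trace_set:
        n += 1
    return n
```

This mirrors the use of projections mentioned above for determining the maximum number of actions a finite process can perform.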
Let T be either ACP τ ǫ -I or one of its extensions introduced before. Then we will write T +PR for T extended with the projection operators π n and the axioms PR1-PR6 from Table 4.
Let T be either ACP τ ǫ -I or one of its extensions introduced before. T can be extended with action renaming by adding, for each function f : A → A such that f (α) = α for all α ∈ A ass , a unary action renaming operator ρ f : P → P to the operators of T and adding the axioms given in Table 5 to the axioms of T . In this table, f stands for an arbitrary function f : A → A such that f (α) = α for all α ∈ A ass , α stands for an arbitrary term from A, and φ stands for an arbitrary term from C.
Let t be a closed term of the extended theory. Then the action renaming operator ρ f can be explained as follows: ρ f (t) denotes the process that behaves the same as the process denoted by t except that, where the latter process performs an action α, the former process performs the action f (α).
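The effect of ρ f can likewise be sketched at the trace level. In this illustration f is given as a dict and acts as the identity outside its domain, loosely mirroring the requirement that f (α) = α for assignment actions:

```python
def rename(f, trace):
    # rho_f at the trace level: the renamed process performs f(a)
    # wherever the original process performs a.
    return tuple(f.get(a, a) for a in trace)
```

For example, renaming "a" to "b" leaves other actions untouched.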
Let T be either ACP τ ǫ -I or one of its extensions introduced before. Then we will write T +RN for T extended with the action renaming operators ρ f and the axioms RN1-RN7 from Table 5.

Computation and the RAM Conditions
In order to investigate whether ACP τ ǫ -I+REC can play a role in the field of models of computation, it has to be explained in the setting of ACP τ ǫ -I+REC what it means that a given process computes a given function. This requires that assumptions about D be made. The assumptions concerned are given in this section. They are based on the idea that the data environment of a computational process consists of one or more RAM (Random Access Machine) memories. Because the assumptions amount to conditions to be satisfied by D, they are called the RAM conditions on D. It is also made precise in this section what it means, in the setting of ACP τ ǫ -I+REC where D satisfies the RAM conditions, that a given process computes a given partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N).

The RAM Conditions
The memory of a RAM consists of a countably infinite number of registers which are numbered by natural numbers. Each register is capable of containing a bit string of arbitrary length. The contents of the registers constitute the state of the memory of the RAM. The execution of an instruction by the RAM amounts to carrying out an operation on its memory state that changes the content of at most one register or to testing a property of its memory state. The RAM conditions are presented in this section using the notions of a RAM memory state, a RAM operation, and a RAM property.
A RAM memory state is a function σ : N → {0, 1} * that satisfies the condition that there exists an i ∈ N such that, for all j ∈ N, σ(i + j) = λ (the empty bit string). We write Σ ram for the set of all RAM memory states.
Let σ be a RAM memory state. Then, for all i ∈ N, σ(i) is the content of the register with number i in memory state σ. The condition on σ expresses that the part of the memory that is actually in use remains finite.
The input region and output region of a function o : Σ ram → Σ ram , written IR(o) and OR(o), respectively, are the subsets of N defined as follows: OR(o) consists of the numbers of all registers that can be affected by o, and IR(o) consists of the numbers of all registers that can affect, under o, the registers whose numbers are in OR(o).
A basic RAM operation is a function o : Σ ram → Σ ram that satisfies the condition that IR(o) is finite and OR(o) has cardinality 0 or 1. We write O ram for the set of all basic RAM operations.
Let o be a basic RAM operation and let σ be a RAM memory state. Then carrying out o on a RAM memory in state σ changes the state of the RAM memory into o(σ). The condition on o expresses that the content of at most one register can be affected and that, if there is such a register, only a finite number of registers can affect it.
The following theorem states that each basic RAM operation transforms states of a RAM memory that coincide on its input region into states that coincide on its output region: for all o ∈ O ram and σ 1 , σ 2 ∈ Σ ram such that σ 1 (i) = σ 2 (i) for all i ∈ IR(o), we have o(σ 1 )(i) = o(σ 2 )(i) for all i ∈ OR(o).
Proof. It is easy to see that the 4-tuple (N, {0, 1} * , Σ ram , O ram ) is a computer according to Definition 3.1 from [26]. From this and Theorem 3.1 from [26], the theorem follows immediately.

⊓ ⊔
The input region of a function p : Σ ram → {0, 1}, written IR(p), is the subset of N defined as follows: IR(p) consists of the numbers of all registers that can affect the value of p.
A basic RAM property is a function p : Σ ram → {0, 1} that satisfies the condition that IR(p) is finite. We write P ram for the set of all basic RAM properties.
Let p be a basic RAM property and let σ be a RAM memory state. Then testing the property p on a RAM memory in state σ yields the value p(σ) and does not change the state of the RAM memory. The condition on p expresses that only a finite number of registers can affect this value. We say that p holds in σ if p(σ) = 1.
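A minimal, hypothetical example of a basic RAM property (not from the paper): the property "register 0 is non-empty". Its value depends only on register 0, so its input region is the finite set {0}. Memory states are modeled as dicts from register numbers to bit strings, with absent keys denoting the empty bit string.

```python
# Sketch of a basic RAM property p : states -> {0, 1} with IR(p) = {0}.

def reg0_nonempty(sigma):
    """Holds (value 1) iff register 0 contains a non-empty bit string."""
    return 1 if sigma.get(0, "") != "" else 0

assert reg0_nonempty({0: "1"}) == 1
assert reg0_nonempty({}) == 0
# States that coincide on the input region {0} yield the same value,
# regardless of all other registers:
assert reg0_nonempty({0: "1", 7: "0"}) == reg0_nonempty({0: "1"})
```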
The following theorem states that each basic RAM property holds in some state of a RAM memory if and only if it holds in all states of the RAM memory that coincide with that state on its input region.
Proof. The 4-tuple concerned here is also a computer according to Definition 3.1 from [26]. Because of this, the theorem follows immediately from Theorem 3.1 from [26]. ⊓ ⊔
With basic RAM operations, only computational processes whose data environment consists of one RAM memory can be considered. Below, n-RAM operations are introduced to remove this restriction. They are defined such that the basic RAM operations are exactly the 1-RAM operations.
An n-RAM operation We write O n-ram , where n ∈ N + , for the set of all n-RAM operations.
The function From this it follows that the basic RAM operation o ′ and the k ∈ N referred to in the above definition are unique if they exist.
The operations from ⋃ n≥1 O n-ram will be referred to as RAM operations.
In a similar way as n-RAM operations, n-RAM properties are defined. The basic RAM properties are exactly the 1-RAM properties.
The properties from ⋃ n≥1 P n-ram will be referred to as RAM properties.
The RAM conditions on D are:
1. the signature Σ D of D includes:
   - a sort BS of bit strings and a sort N of natural numbers;
   - constants λ, 0, 1 : BS and a binary operator : BS × BS → BS;
   - constants 0, 1 : N and a binary operator + : N × N → N;
   - a constant σ λ : D and a ternary operator ⊕ : D × N × BS → D;
2. the sorts, constants, and operators mentioned under 1 are interpreted in D as follows:
   - the sort BS is interpreted as the set {0, 1} * , the sort N is interpreted as the set N, and the sort D is interpreted as the set Σ ram ;
   - the constant λ : BS is interpreted as the empty bit string, the constants 0, 1 : BS are interpreted as the bit strings with the bit 0 and 1, respectively, as sole element, and the operator : BS × BS → BS is interpreted as the concatenation operation on {0, 1} * ;
   - the constants 0, 1 : N are interpreted as the natural numbers 0 and 1, respectively, and the operator + : N × N → N is interpreted as the addition operation on N;
   - the constant σ λ : D is interpreted as the unique σ ∈ Σ ram such that σ(i) = λ for all i ∈ N, and the operator ⊕ : D × N × BS → D is interpreted as the override operation defined by ⊕(σ, i, w)(i) = w and, for all j ∈ N with i ≠ j, ⊕(σ, i, w)(j) = σ(j);
3. the signature Σ D of D is restricted as follows:
   - for each operator from Σ D , the sort of its result is D only if the sort of each of its arguments is D or the operator is ⊕;
   - for each operator from Σ D , the sort of its result is B only if the sort of each of its arguments is D;
4. the interpretation of the operators mentioned under 3 is restricted as follows:
   - each operator with result sort D other than ⊕ is interpreted as a RAM operation;
   - each operator with result sort B is interpreted as a RAM property.
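The override operation ⊕ can be sketched as follows; this is a minimal illustration assuming that memory states are modeled as Python dicts from register numbers to bit strings (absent keys denoting λ), and the function name is hypothetical.

```python
# Sketch of the override operation ⊕(sigma, i, w): the state with register i
# set to w and all other registers unchanged.

def override(sigma, i, w):
    new_sigma = dict(sigma)      # ⊕ yields a new state; sigma is left intact
    if w == "":
        new_sigma.pop(i, None)   # absent keys denote the empty bit string
    else:
        new_sigma[i] = w
    return new_sigma

sigma = {0: "101"}
tau = override(sigma, 2, "11")
assert tau.get(2, "") == "11"    # register 2 now contains 11
assert tau.get(0, "") == "101"   # registers with numbers j != i are unaffected
assert sigma.get(2, "") == ""    # the original state sigma is not modified
```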
The RAM conditions make it possible to explain what it means that a given process computes a given partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N). Moreover, the RAM conditions are nonrestrictive: presumably, they make it possible to deal with all proposed versions of the RAM model of computation as well as all proposed models of parallel computation that are based on a version of the RAM model and the idea that the data environment of a computational process consists of one or more RAM memories. In this section, we make precise in the setting of ACP τ ǫ -I+REC+CFAR, where D is assumed to satisfy the RAM conditions, what it means that a given process computes a given partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N).
In the rest of this paper, D is assumed to satisfy the RAM conditions. Moreover, it is assumed that m ∈ V.
Henceforth, we will use the notation ρ w1 ,...,wn , where w 1 , . . ., w n ∈ {0, 1} * .
If t ∈ P rec is a finite process term, then there is a finite upper bound to the number of actions that the process denoted by t can perform. The depth of a finite process term t ∈ P rec , written depth(t), is defined as follows: for all t ∈ P, depth(t) = min{n ∈ N | π n (t) = t}. This means that depth(t) is the maximum number of actions other than τ that the process denoted by t can perform.
We say that t computes F if there exists a W : N → N such that t computes F in W steps, we say that F is a computable function if there exists a t ∈ P such that t computes F , and we say that t is a computational process if there exists an F : ({0, 1} * ) n → {0, 1} * such that t computes F . We write CP rec for the set {t ∈ P rec | t is a computational process}.
With the above definition, we can establish whether a process of the kind considered in the current setting computes a given partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N) by equational reasoning using the axioms of ACP τ ǫ -I+REC+CFAR. This setting is more general than the setting provided by any known version of the RAM model of computation. It is not suitable as a model of computation itself. However, various known models of computation can be defined by fixing which RAM operations and which RAM properties belong to D and by restricting the computational processes to the ones of a certain form. To the best of my knowledge, the models of computation that can be dealt with in this way include all proposed versions of the RAM model as well as all proposed models of parallel computation that are based on a version of the RAM model and the idea that the data environment of a computational process consists of one or more RAM memories.
Whatever model of computation is obtained by fixing the RAM operations and the RAM properties and by restricting the computational processes to the ones of a certain form, it is an idealization of a real computer because it offers an unbounded number of registers that can contain a bit string of arbitrary length instead of a bounded number of registers that can only contain a bit string of a fixed length.

The RAMP Model of Computation
In this section, a version of the RAM model of computation is described in the setting introduced in the previous sections. Because it focuses on the processes that are produced by RAMs when they execute their built-in program, the version of the RAM model of computation described in this section is called the RAMP (Random Access Machine Process) model of computation.
First, the operators are introduced that represent the RAM operations and the RAM properties that belong to D in the case of the RAMP model of computation. Next, the interpretation of those operators as a RAM operation or a RAM property is given. Finally, the RAMP model of computation is described.

Operators for the RAMP Model
In this section, the operators that are relevant to the RAMP model of computation are introduced.
In the case of the RAMP model of computation, the set of operators from Σ D that are interpreted in D as RAM operations or RAM properties is the set O RAMP consisting of all operators of the forms binop:s 1 :s 2 :d , unop:s 1 :d , and cmpop:s 1 :s 2 , where binop ∈ Binop, unop ∈ Unop, cmpop ∈ Cmpop, s 1 , s 2 ∈ Src, and d ∈ Dst, and where
Binop = {add, sub, and, or} , Unop = {not, shl, shr, mov} , Cmpop = {eq, gt, beq} .
We write O p RAMP for the set of all operators of the form cmpop:s 1 :s 2 from O RAMP and O o RAMP for the set of the remaining operators from O RAMP .
The following is a preliminary explanation of the operators from O RAMP :
- carrying out the operation denoted by an operator of the form binop:s 1 :s 2 :d on a RAM memory in some state boils down to carrying out the binary operation named binop on the values that s 1 and s 2 stand for in that state and then changing the content of the register that d stands for into the result of this;
- carrying out the operation denoted by an operator of the form unop:s 1 :d on a RAM memory in some state boils down to carrying out the unary operation named unop on the value that s 1 stands for in that state and then changing the content of the register that d stands for into the result of this;
- carrying out the operation denoted by an operator of the form cmpop:s 1 :s 2 on a RAM memory in some state boils down to carrying out the binary operation named cmpop on the values that s 1 and s 2 stand for in that state.
The value that s i (i = 1, 2) stands for is as follows:
- immediate: it stands for the shortest bit string representing the natural number i if it is of the form #i;
- direct addressing: it stands for the content of the register with number i if it is of the form i;
- indirect addressing: it stands for the content of the register whose number is represented by the content of the register with number i if it is of the form @i;
and the register that d stands for is as follows:
- direct addressing: it stands for the register with number i if it is of the form i;
- indirect addressing: it stands for the register whose number is represented by the content of the register with number i if it is of the form @i.
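The three addressing modes can be sketched in Python as follows. This is a hypothetical encoding, not from the paper: an operand is a pair (mode, i) with mode in {"imm", "dir", "ind"} standing for #i, i, and @i respectively; memory states are dicts from register numbers to bit strings, and the bit-string coding is LSB-first.

```python
def b(k):                        # natural number -> LSB-first bit string
    w = ""
    while k > 0:
        w, k = w + str(k % 2), k // 2
    return w

def n(w):                        # LSB-first bit string -> natural number
    return sum(int(bit) << pos for pos, bit in enumerate(w))

def eval_src(sigma, operand):
    """Value that a source operand stands for in state sigma."""
    mode, i = operand
    if mode == "imm":            # #i: shortest bit string representing i
        return b(i)
    if mode == "dir":            # i: content of register i
        return sigma.get(i, "")
    return sigma.get(n(sigma.get(i, "")), "")   # @i: one level of indirection

def eval_dst(sigma, operand):
    """Number of the register that a destination operand stands for."""
    mode, i = operand
    return i if mode == "dir" else n(sigma.get(i, ""))

sigma = {1: "01", 2: "11"}       # register 1 contains b(2), register 2 contains b(3)
assert eval_src(sigma, ("imm", 5)) == "101"
assert eval_src(sigma, ("dir", 2)) == "11"
assert eval_src(sigma, ("ind", 1)) == "11"   # register 1 points at register 2
assert eval_dst(sigma, ("ind", 1)) == 2
```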
The following kinds of operations and relations on bit strings are covered by the operators from O RAMP : arithmetic operations (add, sub), logical operations (and, or, not), bit-shift operations (shl, shr), data-transfer operations (mov), arithmetic relations (eq, gt), and the bit-wise equality relation (beq). The arithmetic operations on bit strings are operations that model arithmetic operations on natural numbers with respect to their binary representation by bit strings, the logical operations on bit strings are bit-wise logical operations, and the data-transfer operation on bit strings is the identity operation on bit strings (which is carried out when copying bit strings). The arithmetic relations on bit strings are relations that model arithmetic relations on natural numbers with respect to their binary representation by bit strings.

Interpretation of the Operators for the RAMP Model
In this section, the interpretation of the operators from O RAMP in D is defined.
We start with defining auxiliary functions for conversion between natural numbers and bit strings and evaluation of the elements of Src and Dst.
We write •− for proper subtraction of natural numbers. We write ÷ for zero-totalized Euclidean division of natural numbers, i.e. Euclidean division made total by imposing that division by zero yields zero (like in meadows, see e.g. [8,7]). We use juxtaposition for concatenation of bit strings.
The natural to bit string function b : N → {0, 1} * is recursively defined as follows:
b(0) = λ ,    b(2 • k + 1) = 1 b(k) ,    b(2 • k) = 0 b(k) for k > 0 ,
and the bit string to natural function n : {0, 1} * → N is recursively defined as follows:
n(λ) = 0 ,    n(0 w) = 2 • n(w) ,    n(1 w) = 2 • n(w) + 1 .
These definitions tell us that, when viewed as the binary representation of a natural number, the first bit of a bit string is considered the least significant bit. Results of applying b have no leading zeros, but the operand of n may have leading zeros. Thus, we have that n(b(n)) = n and b(n(w)) = w ′ , where w ′ is w without leading zeros.
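The two conversion functions can be sketched in Python as follows (hypothetical code, not from the paper), under the assumption that b(0) is the empty bit string, which is consistent with the requirement that results of b have no leading zeros.

```python
def b(k):
    """Natural number -> bit string, least significant bit first, no leading zeros."""
    w = ""
    while k > 0:
        w += str(k % 2)
        k //= 2
    return w                      # b(0) is the empty bit string (lambda)

def n(w):
    """Bit string -> natural number; leading zeros in w are allowed."""
    return sum(int(bit) << pos for pos, bit in enumerate(w))

assert b(6) == "011"              # the first bit is the least significant one
assert n("0110") == 6             # a leading zero does not change the value
assert all(n(b(k)) == k for k in range(100))
```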
For each σ ∈ Σ ram , the src-valuation in σ function v σ : Src → {0, 1} * is defined as follows:
v σ (#i) = b(i) ,    v σ (i) = σ(i) ,    v σ (@i) = σ(n(σ(i))) ,
and, for each σ ∈ Σ ram , the dst-valuation in σ function r σ : Dst → N is defined as follows:
r σ (i) = i ,    r σ (@i) = n(σ(i)) .
We continue with defining the operations on bit strings that the operation names from Binop ∪ Unop refer to.
We define the operations on bit strings that the operation names add and sub refer to as follows:
w 1 + w 2 = b(n(w 1 ) + n(w 2 )) ,    w 1 •− w 2 = b(n(w 1 ) •− n(w 2 )) .
These definitions tell us that, although the operands of the operations + and •− may have leading zeros, results of applying these operations have no leading zeros.
We define the operations on bit strings that the operation names and, or, and not refer to recursively as follows:
λ ∧ λ = λ ,    (b 1 w 1 ) ∧ (b 2 w 2 ) = (b 1 ∧ b 2 ) (w 1 ∧ w 2 ) ,    λ ∧ (b w) = 0 (λ ∧ w) ,    (b w) ∧ λ = 0 (w ∧ λ) ,
λ ∨ λ = λ ,    (b 1 w 1 ) ∨ (b 2 w 2 ) = (b 1 ∨ b 2 ) (w 1 ∨ w 2 ) ,    λ ∨ (b w) = b (λ ∨ w) ,    (b w) ∨ λ = b (w ∨ λ) ,
¬λ = λ ,    ¬(0 w) = 1 (¬w) ,    ¬(1 w) = 0 (¬w) ,
where b, b 1 , and b 2 range over single bits and ∧ and ∨ on single bits are the usual Boolean operations. These definitions tell us that, if the operands of the operations ∧ and ∨ do not have the same length, sufficient leading zeros are assumed to exist. Moreover, results of applying these operations and results of applying ¬ can have leading zeros.
We define the operations on bit strings that the operation names shl and shr refer to as follows:
≪w = 0 w ,    ≫λ = λ ,    ≫(b w) = w ,
where b ranges over single bits. These definitions tell us that results of applying the operations ≪ and ≫ can have leading zeros. We have that n(≪w) = n(w) • 2 and n(≫w) = n(w) ÷ 2.
Now, we are ready to define the interpretation of the operators from O RAMP in D. For each o ∈ O RAMP , the interpretation of o in D, written [[o]], is defined as follows:
[[add:s 1 :s 2 :d ]](σ) = ⊕(σ, r σ (d), v σ (s 1 ) + v σ (s 2 )) ,
[[sub:s 1 :s 2 :d ]](σ) = ⊕(σ, r σ (d), v σ (s 1 ) •− v σ (s 2 )) ,
[[and:s 1 :s 2 :d ]](σ) = ⊕(σ, r σ (d), v σ (s 1 ) ∧ v σ (s 2 )) ,
[[or:s 1 :s 2 :d ]](σ) = ⊕(σ, r σ (d), v σ (s 1 ) ∨ v σ (s 2 )) ,
[[not:s 1 :d ]](σ) = ⊕(σ, r σ (d), ¬v σ (s 1 )) ,
[[shl:s 1 :d ]](σ) = ⊕(σ, r σ (d), ≪v σ (s 1 )) ,
[[shr:s 1 :d ]](σ) = ⊕(σ, r σ (d), ≫v σ (s 1 )) ,
[[mov:s 1 :d ]](σ) = ⊕(σ, r σ (d), v σ (s 1 )) ,
[[eq:s 1 :s 2 ]](σ) = 1 if n(v σ (s 1 )) = n(v σ (s 2 )) and 0 otherwise ,
[[gt:s 1 :s 2 ]](σ) = 1 if n(v σ (s 1 )) > n(v σ (s 2 )) and 0 otherwise ,
[[beq:s 1 :s 2 ]](σ) = 1 if v σ (s 1 ) = v σ (s 2 ) and 0 otherwise .
Clearly, the interpretation of each operator from O o RAMP is a basic RAM operation and the interpretation of each operator from O p RAMP is a basic RAM property.
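A minimal Python sketch of the bit-shift operations, assuming the LSB-first representation described above, so shifting left prepends a 0 and shifting right drops the first bit; the asserts check the stated identities n(≪w) = n(w) • 2 and n(≫w) = n(w) ÷ 2 on examples. Names are hypothetical.

```python
def shl(w):
    """Shift left: prepend a 0 at the least significant end, doubling the value."""
    return "0" + w

def shr(w):
    """Shift right: drop the least significant bit, halving the value (0 for lambda)."""
    return w[1:]

def n(w):                         # LSB-first bit string to natural number
    return sum(int(bit) << pos for pos, bit in enumerate(w))

assert n(shl("11")) == 2 * n("11")     # 3 shifted left gives 6
assert n(shr("11")) == n("11") // 2    # 3 shifted right gives 1
assert n(shr("")) == 0                 # shifting the empty string yields 0
```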

RAMP Terms and the RAMP Model of Computation
In this section, the RAMP model of computation is characterized in the setting introduced in Sections 2 and 3. First, however, the notion of a RAMP term is defined. This notion is introduced to make precise what the set of possible computational processes is in the case of the RAMP model of computation.
In this section, D is fixed as follows:
- Σ D is the smallest signature including (a) all sorts, constants, and operators required by the assumptions made about D in ACP τ ǫ -I or the RAM conditions on D and (b) all operators from O RAMP ;
- all sorts, constants, and operators mentioned under (a) are interpreted in D as required by the assumptions made about D in ACP τ ǫ -I or the RAM conditions on D;
- all operators mentioned under (b) are interpreted in D as defined at the end of Section 4.2.
Moreover, it is assumed that m ∈ V.
A RAM process term, called a RAMP term for short, is a term from P rec that is of the form X|E , where, for each Y ∈ vars(E), the recursion equation for Y in E has one of the following forms: RAMP , and Z, Z ′ ∈ vars(E).We write P RAMP for the set of all RAMP terms, and we write CP RAMP for P RAMP ∩ CP rec .
A process that can be denoted by a RAMP term is called a RAM process or a RAMP for short. So, a RAMP is a process that is definable by a guarded linear recursive specification over ACP τ ǫ -I of the kind described above.
As mentioned in Section 1, a basic assumption in this paper is that a model of computation is fully characterized by: (a) a set of possible computational processes, (b) for each possible computational process, a set of possible data environments, and (c) the effect of applying such processes to such environments. D as fixed above and CP RAMP induce the RAMP model of computation:
- the set of possible computational processes is the set of all processes that can be denoted by a term from CP RAMP ;
- for each possible computational process, the set of possible data environments is the set of all {m}-indexed data environments;
- the effect of applying the process denoted by a t ∈ CP RAMP to an {m}-indexed data environment µ is V ρ (t), where ρ is a flexible variable valuation that represents µ.
The RAMP model of computation described above is intended to be essentially the same as the standard RAM model of computation extended with logical instructions and bit-shift instructions. The RAMs from that model will be referred to as the BBRAMs (Basic Binary RAMs). There is a strong resemblance between O RAMP and the set I BBRAM of instructions from which the built-in programs of the BBRAMs can be constructed. Because the concrete syntax of the instructions does not matter, I BBRAM can be defined as follows:
I BBRAM = O o RAMP ∪ {jmp:p:i | p ∈ O p RAMP ∧ i ∈ N + } ∪ {halt} .
A BBRAM program is a non-empty sequence C from I BBRAM * in which instructions of the form jmp:p:i with i > ℓ(C) do not occur. We write IS BBRAM for the set of all BBRAM programs.
The execution of an instruction o from O o RAMP by a BBRAM causes the state of its memory to change according to [[o]]. The execution of an instruction of the form jmp:p:i or the instruction halt by a BBRAM has no effect on the state of its memory. After execution of an instruction by a BBRAM, the BBRAM proceeds to the execution of the next instruction from its built-in program, except when the instruction is of the form jmp:p:i and [[p]] holds in the current state of its memory or when the instruction is halt. In the former case, the execution proceeds to the ith instruction of the program. In the latter case, the execution terminates successfully.
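The control flow just described can be sketched as a small interpreter. This is a hypothetical encoding, not from the paper: a program is a Python list whose elements are ("op", f) for an operation with state transformer f, ("jmp", p, i) for a conditional jump to the i-th instruction (1-based), and "halt".

```python
def run_bbram(program, sigma, max_steps=10_000):
    """Execute a BBRAM-style program on memory state sigma (a dict)."""
    pc, steps = 0, 0
    while steps < max_steps:
        instr = program[pc]
        steps += 1
        if instr == "halt":
            return sigma                   # successful termination
        if instr[0] == "jmp":
            _, p, i = instr
            pc = i - 1 if p(sigma) == 1 else pc + 1   # jump iff the property holds
        else:
            _, f = instr
            sigma = f(sigma)               # carry out the RAM operation
            pc += 1
    raise RuntimeError("step budget exhausted (program may not halt)")

# Toy program: set register 0 to "1", then halt.
prog = [("op", lambda s: {**s, 0: "1"}), "halt"]
assert run_bbram(prog, {}) == {0: "1"}
```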
The processes that are produced by the BBRAMs when they execute their built-in program are given by a function M : IS BBRAM → P RAMP that is defined up to consistent renaming of variables as follows: M(c 1 . . . c n ) = X 1 |E , where E consists of, for each i ∈ N with 1 ≤ i ≤ n, an equation of a form determined by c i , and where X 1 , . . ., X n are different variables from X . Let C ∈ IS BBRAM . Then M(C) denotes the process that is produced by the BBRAM whose built-in program is C when it executes its built-in program.
The definition of M is in accordance with the descriptions of various versions of the RAM model of computation in the literature on this subject (see e.g. [11,20,1,31]). However, to the best of my knowledge, none of these descriptions is precise and complete enough to allow a proof of this.
The RAMPs are exactly the processes that can be produced by the BBRAMs when they execute their built-in program.
Theorem 8. For each constant X|E ∈ P rec , X|E ∈ P RAMP iff there exists a C ∈ IS BBRAM such that X|E and M(C) are identical up to consistent renaming of variables.
Proof. It is easy to see that (a) for all C ∈ IS BBRAM , M(C) ∈ P RAMP and (b) M is a bijection up to consistent renaming of variables. From this, the theorem follows immediately.

⊓ ⊔
Notice that, if X|E and X ′ |E ′ are identical up to consistent renaming of variables, then the equation X|E = X ′ |E ′ is derivable from RDP and RSP.
The following theorem is a result concerning the computational power of RAMPs.
Theorem 9. For each F : ({0, 1} * ) n → {0, 1} * , there exists a t ∈ P RAMP such that t computes F iff F is Turing-computable.
Proof. By Theorem 8, it is sufficient to show that each BBRAM is Turing equivalent to a Turing machine. The BBRAM model of computation is essentially the same as the BRAM model of computation from [12] extended with bit-shift instructions. It follows directly from simulation results mentioned in [12] (part (3) of Theorem 2.4, part (3) of Theorem 2.5, and part (2) of Theorem 2.6) that each BRAM can be simulated by a Turing machine and vice versa. Because each Turing machine can be simulated by a BRAM, we immediately have that each Turing machine can be simulated by a BBRAM. It is easy to see that the bit-shift instructions can be simulated by a Turing machine. From this and the fact that each BRAM can be simulated by a Turing machine, it follows that each BBRAM can be simulated by a Turing machine as well. Hence, each BBRAM is Turing equivalent to a Turing machine.
⊓ ⊔
Henceforth, we write POLY for {f | f : N → N ∧ f is a polynomial function}. The following theorem tells us that the decision problems that belong to P are exactly the decision problems that can be solved by means of a RAMP in polynomially many steps.
Theorem 10. For each F : {0, 1} * → {0, 1}, there exist a t ∈ P RAMP and a W ∈ POLY such that t computes F in W steps iff F ∈ P.
Proof. By Theorem 8, it is sufficient to show that time complexity on BBRAMs under the uniform time measure, i.e. the number of steps, and time complexity on multi-tape Turing machines are polynomially related. The BBRAM model of computation is essentially the same as the BRAM model of computation from [12] extended with bit-shift instructions. It follows directly from simulation results mentioned in [12] (part (3) of Theorem 2.4, part (3) of Theorem 2.5, and part (2) of Theorem 2.6) that time complexity on BRAMs under the uniform time measure and time complexity on multi-tape Turing machines are polynomially related. It is easy to see that the bit-shift instructions can be simulated by a multi-tape Turing machine in linear time. Hence, the time complexities remain polynomially related if the BRAM model is extended with the bit-shift instructions. ⊓ ⊔

The APRAMP Model of Computation
In this section, an asynchronous parallel RAM model of computation is described in the setting introduced in Sections 2 and 3. Because it focuses on the processes that are produced by asynchronous parallel RAMs when they execute their built-in programs, the parallel RAM model of computation described in this section is called the APRAMP (Asynchronous Parallel Random Access Machine Process) model of computation. In this model of computation, a computational process is the parallel composition of a number of processes, each of which has its own private RAM memory. However, together they also have a shared RAM memory for synchronization and communication.
First, the operators are introduced that represent the RAM operations and the RAM properties that belong to D in the case of the APRAMP model of computation. Next, the interpretation of those operators as a RAM operation or a RAM property is given. Finally, the APRAMP model of computation is described.
In the case of the APRAMP model of computation, the set of operators from Σ D that are interpreted in D as RAM operations or RAM properties is the set O PRAMP defined as follows:
O PRAMP = O RAMP ∪ {loa:@i:d | i ∈ N ∧ d ∈ Dst} ∪ {sto:s:@i | s ∈ Src ∧ i ∈ N} ∪ {ini:#i | i ∈ N + } ,
where Src and Dst are as defined in Section 4.1.
In operators of the forms binop:s 1 :s 2 :d , unop:s 1 :d , and cmpop:s 1 :s 2 from O RAMP , s 1 , s 2 , and d refer to the private RAM memory. In operators of the forms loa:@i:d and sto:s:@i from O PRAMP \ O RAMP , s and d refer to the private RAM memory too. The operators of the forms loa:@i:d and sto:s:@i differ from the operators of the forms mov:@i:d and mov:s:@i, respectively, in that @i stands for the content of the register and the register, respectively, from the shared RAM memory whose number is represented by the content of the register with number i from the private memory. The operators of the form ini:#i initialize the registers from the private memory as follows: the content of the register with number 0 becomes the shortest bit string that represents the natural number i and the content of all other registers becomes the empty bit string.
Now, we are ready to define the interpretation of the operators from O PRAMP in D. For each o ∈ O PRAMP , the interpretation of o in D, written [[o]], is as defined in Section 4.2 for the operators from O RAMP and as described above for the additional operators. Clearly, the interpretation of each operator of the form ini:#i is a 1-RAM operation and the interpretation of each operator of the form loa:@i:d or sto:s:@i is a 2-RAM operation.
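The effect of the load and store operators on a pair of a private and a shared memory state can be sketched as follows. This is a simplified, hypothetical illustration (not from the paper) in which the destination of loa and the source of sto are plain register numbers rather than full operands; states are dicts from register numbers to bit strings, decoded LSB-first.

```python
def n(w):                                    # LSB-first bit string -> natural number
    return sum(int(bit) << pos for pos, bit in enumerate(w))

def loa(private, shared, i, d):
    """loa:@i:d — copy, into private register d, the shared register whose
    number is given by the content of private register i."""
    addr = n(private.get(i, ""))
    return {**private, d: shared.get(addr, "")}, shared

def sto(private, shared, s, i):
    """sto:s:@i — copy private register s into the shared register whose
    number is given by the content of private register i."""
    addr = n(private.get(i, ""))
    return private, {**shared, addr: private.get(s, "")}

private, shared = {0: "01"}, {2: "111"}      # private register 0 contains b(2)
private, shared = loa(private, shared, 0, 1)
assert private[1] == "111"                   # shared register 2 copied into private 1
```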
Below, the APRAMP model of computation is characterized in the setting introduced in Sections 2 and 3. First, however, the notion of an APRAMP term is defined. This notion is introduced to make precise what the set of possible computational processes is in the case of the APRAMP model of computation.
In this section, D is fixed as follows:
- Σ D is the smallest signature including (a) all sorts, constants, and operators required by the assumptions made about D in ACP τ ǫ -I or the RAM conditions on D and (b) all operators from O PRAMP ;
- all sorts, constants, and operators mentioned under (a) are interpreted in D as required by the assumptions made about D in ACP τ ǫ -I or the RAM conditions on D;
- all operators mentioned under (b) are interpreted in D as defined above.
Moreover, it is assumed that m ∈ V and, for all i ∈ N + , m i ∈ V. We write V m n , where n ∈ N + , for the set {m, m 1 , . . ., m n }.
An n-APRAM process term (n ∈ N + ), called an n-APRAMP term for short, is a term from P rec that is of the form X 1 |E 1 ∥ . . . ∥ X n |E n , where, for each i ∈ N + with i ≤ n, for each X ∈ vars(E i ), the recursion equation for X in E i has one of the following forms: (1)
We write P APRAMP for the set of all terms t ∈ P rec such that t is an n-APRAMP term for some n ∈ N + , and we write CP APRAMP for P APRAMP ∩ CP rec . Moreover, we write deg(t), where t ∈ P APRAMP , for the unique n ∈ N + such that t is an n-APRAMP term.
The terms from P APRAMP will be referred to as APRAMP terms.
A process that can be denoted by an APRAMP term is called an APRAM process or an APRAMP for short. So, an APRAMP is a parallel composition of processes that are definable by a guarded linear recursive specification over ACP τ ǫ -I of the kind described above. Each of those parallel processes starts with an initialization step in which the number of its private memory is made available in the register with number 0 from its private memory.
Notice that by Lemma 1 and Theorem 2, for all t ∈ P APRAMP , there exists a guarded linear recursive specification E and X ∈ vars(E) such that t ↔ rb X|E .
As mentioned before, a basic assumption in this paper is that a model of computation is fully characterized by: (a) a set of possible computational processes, (b) for each possible computational process, a set of possible data environments, and (c) the effect of applying such processes to such environments. D as fixed above and CP APRAMP induce the APRAMP model of computation:
- the set of possible computational processes is the set of all processes that can be denoted by a term from CP APRAMP ;
- for each possible computational process p, the set of possible data environments is the set of all V m deg(t) -indexed data environments, where t is a term from CP APRAMP denoting p;
- the effect of applying the process denoted by a t ∈ CP APRAMP to a V m deg(t) -indexed data environment µ is V ρ (t), where ρ is a flexible variable valuation that represents µ.
The APRAMP model of computation described above is intended to be close to the asynchronous parallel RAM model of computation sketched in [10,23,29]. However, the time complexity measure for this model introduced in Section 7 is quite different from the ones proposed in those papers.
The APRAMPs can be looked upon as the processes that can be produced by a collection of BBRAMs with an extended instruction set when they execute their built-in program asynchronously in parallel.
The BBRAMs with the extended instruction set will be referred to as the SMBRAMs (Shared Memory Binary RAMs). There is a strong resemblance between O PRAMP and the set I SMBRAM of instructions from which the built-in programs of the SMBRAMs can be constructed. Because the concrete syntax of the instructions does not matter, I SMBRAM can be defined as follows:
I SMBRAM = I BBRAM ∪ {loa:@i:d | i ∈ N ∧ d ∈ Dst} ∪ {sto:s:@i | s ∈ Src ∧ i ∈ N} .
An SMBRAM program is a non-empty sequence C from I SMBRAM * in which instructions of the form jmp:p:i with i > ℓ(C) do not occur. We write IS SMBRAM for the set of all SMBRAM programs.
For the SMBRAMs whose private memory has number i (i ∈ N + ), the processes that are produced when they execute their built-in program are given by a function M i : IS SMBRAM → P APRAMP that is defined up to consistent renaming of variables as follows: M i (c 1 . . . c n ) = X 1 |E i , where E i consists of an initialization equation and, for each j ∈ N with 1 ≤ j ≤ n, an equation of a form determined by c j .
The APRAMPs are exactly the processes that can be produced by a collection of SMBRAMs when they execute their built-in program asynchronously in parallel.
Proof. Let i ∈ N + be such that i ≤ n. It is easy to see that (a) for all C ∈ IS SMBRAM , M i (C) ∈ P APRAMP and (b) M i is a bijection up to consistent renaming of variables. From this, it follows immediately that there exists a C ∈ IS SMBRAM such that X i |E i and M i (C) are identical up to consistent renaming of variables. From this, the theorem follows immediately.
⊓ ⊔
- for each X, Y ∈ vars(E i ) with Y occurring in the right-hand side of the recursion equation for X in E i , the recursion equation for X in E i is of the form (1) iff the recursion equation for Y in E i is not of the form (1);
- for each X ∈ vars(E i ), the recursion equation for X in E i is of the form (2) iff X ≡ X i .
We write P SPRAMP for the set of all terms t ∈ P rec such that t is an n-SPRAMP term for some n ∈ N + , and we write CP SPRAMP for P SPRAMP ∩ CP rec . Moreover, we write deg(t), where t ∈ P SPRAMP , for the unique n ∈ N + such that t is an n-SPRAMP term.
The terms from P SPRAMP will be referred to as SPRAMP terms.
A process that can be denoted by an SPRAMP term is called an SPRAM process or an SPRAMP for short. So, an SPRAMP is a synchronous parallel composition of processes that are definable by a guarded linear recursive specification over ACP τ ǫ -I of the kind described above. Each of those parallel processes starts with an initialization step in which the number of its private memory is made available in the register with number 0 from its private memory.
Notice that by Lemma 1 and Theorem 2, for all t ∈ P SPRAMP , there exists a guarded linear recursive specification E and X ∈ vars(E) such that t ↔ rb X|E .
D as fixed above and CP SPRAMP induce the SPRAMP model of computation:
- the set of possible computational processes is the set of all processes that can be denoted by a term from CP SPRAMP ;
- for each possible computational process p, the set of possible data environments is the set of all V m deg(t) -indexed data environments, where t is a term from CP SPRAMP denoting p;
- the effect of applying the process denoted by a t ∈ CP SPRAMP to a V m deg(t) -indexed data environment µ is V ρ (t), where ρ is a flexible variable valuation that represents µ.
The SPRAMP model of computation described above is intended to be close to the synchronous parallel RAM model of computation sketched in [36]. However, that model is a PRIORITY CRCW model whereas the SPRAMP model is essentially an ARBITRARY CRCW model (see [22,19]). This means basically that, in the case that two or more parallel processes attempt to change the content of the same register at the same time, the process that succeeds in its attempt is chosen arbitrarily. Moreover, in the model sketched in [36], the built-in programs of the RAMs that make up a PRAM must be the same, whereas the parallel processes that make up an SPRAMP may be different.
The SPRAMPs can be looked upon as the processes that can be produced by a collection of SMBRAMs when they execute their built-in program synchronously in parallel.
For the SMBRAMs whose private memory has number i (i ∈ N + ), the processes that are produced when they execute their built-in program are now given by a function M sync i : IS SMBRAM → P SPRAMP that is defined up to consistent renaming of variables as follows: M sync i (c 1 . . . c n ) = X 1 |E i , where E i consists of an initialization equation and, for each j ∈ N with 1 ≤ j ≤ n, an equation of a form determined by c j .
The SPRAMPs are exactly the processes that can be produced by a collection of SMBRAMs when they execute their built-in program synchronously in parallel.
Proof. Let i ∈ N + be such that i ≤ n. It is easy to see that (a) for all C ∈ IS SMBRAM , M sync i (C) ∈ P SPRAMP and (b) M sync i is a bijection up to consistent renaming of variables. From this, it follows immediately that there exists a C ∈ IS SMBRAM such that X i |E i and M sync i (C) are identical up to consistent renaming of variables. From this, the theorem follows immediately.

⊓ ⊔
The first synchronous parallel RAM models of computation, e.g. the models proposed in [13,17,36], are older than the first asynchronous parallel RAM models of computation, e.g. the models proposed in [10,23,29]. It appears that the synchronous parallel RAM models have been primarily devised to be used in the area of computational complexity and that the asynchronous parallel RAM models have been primarily devised because the synchronous models were considered of restricted value in the area of algorithm efficiency.

Time and Work Complexity Measures
This section concerns complexity measures for the models of computation presented in Sections 4-6. Before the complexity measures in question are introduced, it is made precise in the current setting what a complexity measure is and what the complexity of a computable function from ({0, 1} * ) n to {0, 1} * under a given complexity measure is.

The RAMP Model of Computation
Below, a time complexity measure and a work complexity measure for the RAMP model of computation are introduced.
The sequential uniform time measure is essentially the same as the uniform time complexity measure for the standard RAM model of computation (see e.g. [1]). It is an idealized time measure: the simplifying assumption is made that a RAMP performs one step per time unit. That is, this measure actually yields, for a given RAMP and a given data environment, the maximum number of steps that can be performed by the given RAMP before eventually halting in the case where the initial data environment is the given data environment. However, the maximum number of steps can also be looked upon as the maximum amount of work. This makes the sequential uniform time measure a very plausible work measure as well.
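As a toy illustration of the uniform time measure, the following sketch runs a hypothetical register machine and charges one time unit per executed step. The instruction set (inc, dec, jzero, halt) is a minimal stand-in chosen only to keep the sketch short; it is not the RAMP instruction set.

```python
# Sketch of the uniform time measure: count one time unit per step of
# a toy register machine until it halts.  Hypothetical instruction set.

def uniform_time(program, registers):
    """Return the number of steps the program performs before halting."""
    pc, steps = 0, 0
    while program[pc][0] != "halt":
        op = program[pc]
        steps += 1
        if op[0] == "inc":          # increment register op[1]
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "dec":        # decrement register op[1] (floor at 0)
            registers[op[1]] = max(0, registers[op[1]] - 1)
            pc += 1
        elif op[0] == "jzero":      # jump to op[2] if register op[1] is 0
            pc = op[2] if registers[op[1]] == 0 else pc + 1
    return steps

# Move the content of register 0 into register 1, one time unit per step.
prog = [("jzero", 0, 4), ("dec", 0), ("inc", 1), ("jzero", 2, 0), ("halt",)]
```

For an input where register 0 holds 3, this program takes 13 steps; the step count is exactly the quantity the uniform time measure idealizes.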
The sequential work measure is the complexity measure M SW for CP RAMP defined by for all t ∈ CP RAMP and (w 1 , . . ., w n ) ∈ m∈N ({0, 1} * ) m such that V ρw 1 ,...,wn (t) is a terminating process term.
In the sequential case, it is in accordance with our intuition that the uniform time complexity measure coincides with the work complexity measure.In the parallel case, this is not in accordance with our intuition: it is to be expected that the introduction of parallelism results in a reduction of the amount of time needed but not in a reduction of the amount of work needed.

The APRAMP Model of Computation
Below, a time complexity measure and a work complexity measure for the APRAMP model of computation are introduced.
The asynchronous parallel uniform time measure is the complexity measure M APUT for CP APRAMP defined by where H i is the set of all α ∈ A in which m i does not occur, for all t ∈ CP APRAMP and (w 1 , . . ., w n ) ∈ m∈N ({0, 1} * ) m such that V ρw 1 ,...,wn (t) is a terminating process term.
In the above definition, τ Hi turns steps of the process denoted by V ρw 1 ,...,wn (t) that are not performed by the parallel process whose private memory is referred to by m i into silent steps. Because depth does not count silent steps, depth(τ Hi (V ρw 1 ,...,wn (t))) is the maximum number of steps that the parallel process whose private memory is referred to by m i can perform.
Hence, the asynchronous parallel uniform time measure yields, for a given APRAMP and a given data environment, the maximum over all parallel processes that make up the given APRAMP of the maximum number of steps that can be performed before eventually halting in the case where the initial data environment is the given data environment. Because it yields the maximum number of steps that can be performed by one of the parallel processes that make up the given APRAMP, the asynchronous parallel uniform time measure differs from the asynchronous parallel work measure.
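The difference between the two measures can be illustrated with a toy sketch (not the paper's formal definitions): given the number of steps each constituent process performs, time is the maximum over the processes while work is the total.

```python
# Toy illustration of asynchronous parallel time vs. work: time is the
# maximum step count over the parallel processes, work is the sum.

def async_uniform_time(steps_per_process):
    return max(steps_per_process)

def async_work(steps_per_process):
    return sum(steps_per_process)

# Three parallel processes performing 5, 3 and 5 steps respectively:
# time is 5 (the slowest process), work is 13 (all steps performed).
```

This also makes the earlier remark concrete: parallelism can reduce time (the maximum) without reducing work (the sum).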
The sequential work measure and the asynchronous parallel work measure are such that comparison of complexities under these measures has some meaning: both concern the maximum number of steps that can be performed by a computational process.
Like all complexity measures introduced in this section, the asynchronous parallel uniform time measure introduced above is a worst-case complexity measure. It is quite different from the parallel time complexity measures that have been proposed for the asynchronous parallel RAM model of computation sketched in [10,23,29]. The round complexity measure is proposed as parallel time complexity measure in [10,23] and an expected time complexity measure is proposed as parallel time complexity measure in [29]. Neither of those measures is a worst-case complexity measure: the round complexity measure removes certain cases from consideration and the expected time complexity measure is an average-case complexity measure.
It appears that the round complexity measure and the expected time complexity measure are more important to the analysis of the efficiency of parallel algorithms, whereas the asynchronous parallel time complexity measure introduced above is more important to the analysis of the complexity of computational problems that are amenable to solution by a parallel algorithm. After all, the area of computational complexity is mostly concerned with worst-case complexity.
In [29], the asynchronous parallel uniform time measure introduced above is explicitly rejected. Consider the case where there exists an interleaving of the parallel processes that make up an APRAMP that is close to performing all steps of each of the processes uninterrupted by steps of the others. Such an interleaving is not ruled out by synchronization (through the shared memory) and may even be enforced by synchronization. So it may be likely or unlikely to occur. Seen in that light, it is surprising that it is stated in [29] that such an interleaving has "very low probability, yielding a sequential measure".

The SPRAMP Model of Computation
Below, a time complexity measure and a work complexity measure for the SPRAMP model of computation are introduced.
The time complexity measure introduced below is essentially the same as the uniform time complexity measure that goes with the synchronous parallel RAM model of computation sketched in [36] and similar models.
In the above definition, τ sync turns all steps of the process denoted by V ρw 1 ,...,wn (t) other than synchronization steps, i.e. all computational steps, into silent steps. Because depth does not count silent steps, depth(τ sync (V ρw 1 ,...,wn (t))) is the maximum number of synchronization steps that can be performed by the process denoted by V ρw 1 ,...,wn (t) before eventually halting.
Hence, the synchronous parallel uniform time measure yields, for a given SPRAMP and a given data environment, the maximum number of synchronization steps that can be performed by the given SPRAMP before eventually halting in the case where the initial data environment is the given data environment. Because the parallel processes that make up the given SPRAMP synchronize after each computational step, the time between two consecutive synchronization steps can be considered one time unit. Therefore, this measure is a plausible time measure. Clearly, the maximum number of synchronization steps that can be performed by the given SPRAMP and the maximum number of computational steps that can be performed by the given SPRAMP are separate numbers. So the synchronous parallel uniform time measure differs from the synchronous parallel work measure.
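A toy sketch (not the formal measures) shows the distinction: represent an SPRAMP run as a sequence of step labels in which "sync" marks a synchronization step and anything else is a computational step of one of the processes. Time counts only the synchronization steps; work counts the computational steps.

```python
# Toy illustration of synchronous parallel time vs. work on a trace of
# step labels: "sync" marks a synchronization step, everything else is
# a computational step.

def sync_uniform_time(trace):
    return sum(1 for step in trace if step == "sync")

def sync_work(trace):
    return sum(1 for step in trace if step != "sync")

# Two processes, each one computational step per round, three rounds:
trace = ["comp1", "comp2", "sync",
         "comp1", "comp2", "sync",
         "comp1", "comp2", "sync"]
```

Here the time is 3 (one unit per round, delimited by synchronizations) while the work is 6, making it plain that the two measures yield separate numbers.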
The sequential work measure and the synchronous parallel work measure are such that comparison of complexities under these measures has some meaning: both concern the maximum number of computational steps that can be performed by a computational process.
Take an SPRAMP and the APRAMP which is the SPRAMP without the automatic synchronization after each computational step. Assume that at any stage the next step to be taken by any of the parallel processes that make up the APRAMP does not depend on the steps that have been taken by the other parallel processes. Then the synchronous parallel time measure M SPUT yields for the SPRAMP the same result as the asynchronous parallel time measure M APUT yields for the APRAMP.

SPRAMPs and the Parallel Computation Thesis
The SPRAMP model of computation is a simple model based on an idealization of existing shared memory parallel machines that abstracts from synchronization overhead.The synchronous parallel uniform time measure introduced for this model is a simple, hardware independent, and worst-case complexity measure.
The question is whether the SPRAMP model of computation is a reasonable model of parallel computation. A model of parallel computation is generally considered reasonable if the parallel computation thesis holds for it. In the current setting, this thesis can be phrased as follows: the parallel computation thesis holds for a model of computation if, for each computable partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N), its complexity under the time complexity measure for that model is polynomially related to its complexity under the space complexity measure for the multi-tape Turing machine model of computation.
Before we answer the question whether the SPRAMP model of computation is a reasonable model of parallel computation, we go into a classification of synchronous parallel RAMs. This classification is used later on in answering the question. Below, synchronous parallel RAMs will be called PRAMs for short.
First of all, PRAMs can be classified as PRAMs whose constituent RAMs may execute different programs or PRAMs whose constituent RAMs must execute the same program. The former PRAMs are called MIMD PRAMs and the latter PRAMs are called SIMD PRAMs.
In [22, Section 2.1], PRAMs are classified according to their restrictions on shared memory access as EREW (Exclusive-Read Exclusive-Write), CREW (Concurrent-Read Exclusive-Write), or CRCW (Concurrent-Read Concurrent-Write). CRCW PRAMs are further classified according to their way of resolving write conflicts as:
- COMMON, where all values attempted to be written concurrently into the same shared register must be identical;
- ARBITRARY, where one of the values attempted to be written concurrently into the same shared register is chosen arbitrarily;
- PRIORITY, where the RAMs making up the PRAM are numbered and, from all values attempted to be written concurrently into the same shared register, the one attempted to be written by the RAM with the lowest number is chosen.
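The three write-conflict resolution policies can be sketched as follows. The representation of concurrent writes as (RAM number, value) pairs and the function name are hypothetical, chosen only for illustration.

```python
# Sketch of the three CRCW write-conflict resolution policies.  Each
# concurrent write is a (ram_number, value) pair; resolve() returns the
# value that ends up in the shared register under the given policy.

def resolve(writes, policy):
    values = [v for _, v in writes]
    if policy == "COMMON":
        # All concurrently written values must be identical.
        assert len(set(values)) == 1, "COMMON policy violated"
        return values[0]
    if policy == "ARBITRARY":
        # Any of the written values may win; here we simply take the first.
        return values[0]
    if policy == "PRIORITY":
        # The RAM with the lowest number wins.
        return min(writes)[1]
    raise ValueError(policy)

writes = [(3, "b"), (1, "a"), (2, "c")]
```

Under PRIORITY the value written by RAM 1 wins; under ARBITRARY any of the three values is an acceptable outcome, which is exactly the freedom the SPRAMP model exploits.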
Below, the next two lemmas about the above classifications of PRAMs will be used to show that the parallel computation thesis holds for the SPRAMP model of computation.

Proof. Assume a fixed instruction set. Part 1. This follows directly from the definitions concerned (the programs involved can be executed directly).
Part 2. This is a special case of Theorem 3 from [39].

⊓ ⊔
The next theorem expresses that the parallel computation thesis holds for the SPRAMP model of computation.
Theorem 13. Let F : ({0, 1} * ) m → {0, 1} * for some m ∈ N be a computable function and let T, S : N → N. Then:
- if F is of complexity T (n) under the synchronous parallel time complexity measure M SPUT for the SPRAMP model of computation, then there exists a k ∈ N such that F is of complexity O(T (n) k ) under the space complexity measure for the multi-tape Turing machine model of computation;
- if F is of complexity S(n) under the space complexity measure for the multi-tape Turing machine model of computation, then there exists a k ∈ N such that F is of complexity O(S(n) k ) under the synchronous parallel time complexity measure M SPUT for the SPRAMP model of computation, provided that S(n) ≥ log(n) for all n ∈ N.
Proof. In [17], SIMDAGs are introduced. SIMDAGs are SIMD PRIORITY CRCW PRAMs with a subset of the instruction set of SMBRAMs as instruction set. Because DSPACE(S(n)) ⊆ NSPACE(S(n)) ⊆ DSPACE(S(n) 2 ), the variant of the current theorem for the SIMDAG model of computation follows immediately from Theorems 2.1 and 2.2 from [17] under a constructibility assumption for S(n). However, the proofs of those theorems go through with the instruction set of SMBRAMs because none of the SMBRAM instructions builds bit strings that are more than O(T (n)) bits long in T (n) time. Moreover, if we take forking variants of SIMDAGs with the instruction set of SMBRAMs (resembling the P-RAMs from [13]), the constructibility assumption for S(n) is not needed. This can be shown in the same way as in the proof of Lemma 1a from [13].
In the rest of this proof, we write E-SIMDAG for a SIMDAG with the instruction set of SMBRAMs and forking E-SIMDAG for a forking variant of an E-SIMDAG.
The variant of the current theorem for the forking E-SIMDAG model of computation follows directly from the above-mentioned facts.
Now forking E-SIMDAGs can be simulated by E-SIMDAGs with O(p) SMBRAMs and with the parallel time increased by a factor of O(log(p)), where p is the number of SMBRAMs used by the forking E-SIMDAG concerned. This is proved as in the proof of Lemma 2.1 from [18]. The other way round, E-SIMDAGs can be simulated by forking E-SIMDAGs with eventually the same number of SMBRAMs and with the parallel time increased by O(log(p)), where p is the number of SMBRAMs of the E-SIMDAG concerned. This is easy to see: before the programs of the p SMBRAMs involved can be executed directly, the p SMBRAMs must be created by forking, and this can be done in O(log(p)) time. It follows immediately from these simulation results that time complexities on forking E-SIMDAGs are polynomially related to time complexities on E-SIMDAGs.
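The O(log(p)) bound on creating p processes by forking comes from repeated doubling: in each round every existing process forks once, so the population doubles per round and ⌈log2(p)⌉ rounds suffice. A minimal sketch of that argument:

```python
# Sketch of the doubling argument: rounds of simultaneous forking
# needed to reach at least p processes, starting from one process.

def forking_rounds(p):
    """Number of fork rounds needed to reach at least p processes."""
    processes, rounds = 1, 0
    while processes < p:
        processes *= 2   # every existing process forks once per round
        rounds += 1
    return rounds
```

For example, 8 processes need 3 rounds and 1000 processes need 10, matching the ⌈log2(p)⌉ bound.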
The variant of the current theorem for the E-SIMDAG model of computation follows directly from the variant of the current theorem for the forking E-SIMDAG model of computation and the above-mentioned polynomial relationship. From this, the fact that E-SIMDAGs are actually SIMD PRIORITY CRCW PRAMs that are composed of SMBRAMs, Lemma 7, Lemma 6, and Theorem 12, the current theorem now follows directly.

Probabilistic Computation
In this section, it is first made precise in the setting introduced in Sections 2 and 3 what it means that a given process probabilistically computes a given partial function from ({0, 1} * ) n to {0, 1} * (n ∈ N). Thereafter, a probabilistic RAM model of computation and complexity measures for it are described in the setting introduced in Sections 2 and 3.
Recall that D is assumed to satisfy the RAM conditions given in Section 3.1. Like in Section 3.2, it is assumed that m ∈ V. Moreover, it is assumed that toss ∈ A and γ is such that γ(toss, a) = δ for all a ∈ A. We write toss = A\{toss}.
The basic action toss is used to model probabilistic choices. This is possible by the assumptions made about toss: in the process denoted by a term of the form toss • t + toss • t ′ , the choice to behave as the process denoted by t or as the process denoted by t ′ is made independent of anything, as with tossing a coin. This allows for assuming that the probability that the first process is chosen and the probability that the second process is chosen are both 1/2. In order to model probabilistic choices in ACP τ ǫ -I+REC+CFAR, the use of the basic action toss has to be restricted to the modeling of probabilistic choices. This restriction is covered by the subset P toss rec of P rec defined as follows: P toss rec is the set of all t ∈ P rec for which there exists a guarded linear recursive specification E and X ∈ vars(E) such that ACP τ ǫ -I+REC+CFAR ⊢ t = X|E and, for all X ′ = t ′ ∈ E in which toss occurs, t ′ is of the form t :→ toss • Y ′ + t :→ toss • Z ′ , where Y ′ , Z ′ ∈ vars(E). By Lemma 1, P toss rec is well-defined. In order to make precise what it means that a given process probabilistically computes a given partial function from ({0, 1} * ) n to {0, 1} * , three auxiliary notions are first defined.

that t is a probabilistic computational process if there exists an F : ({0, 1} * ) n → {0, 1} * such that t probabilistically computes F .
If this error probability is bounded below 1/2 for all w 1 , . . ., w n ∈ {0, 1} * for which F (w 1 , . . ., w n ) is defined, it can be made arbitrarily small merely by repeating the computation a bounded number of times and taking the majority result. This observation justifies the following definition.
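The amplification argument can be made quantitative with a short sketch: if a single run errs with probability eps < 1/2, the probability that the majority of k independent runs errs follows from the binomial distribution, and it shrinks as k grows. The function name is hypothetical.

```python
# Sketch of error reduction by majority voting: probability that more
# than half of k independent runs err, when each errs with probability
# eps, computed exactly from the binomial distribution.

from math import comb

def majority_error(eps, k):
    """Probability that at least k//2 + 1 of k independent runs err."""
    return sum(comb(k, i) * eps**i * (1 - eps)**(k - i)
               for i in range(k // 2 + 1, k + 1))
```

With eps = 0.3, a single run errs with probability 0.3, while the majority of 3 runs errs with probability 0.216 and the majority of 5 runs with about 0.163: repetition strictly reduces the error probability, as the definition relies on.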

The PrRAMP Model of Computation
In this section, a probabilistic RAM model of computation is described in the setting introduced in Sections 2, 3.1, and 9.1. Because it focuses on the processes that are produced by probabilistic RAMs when they execute their built-in programs, the probabilistic RAM model of computation described in this section is called the PrRAMP (Probabilistic Random Access Machine Process) model of computation.
In the case of the PrRAMP model of computation, the set of operators from Σ D that are interpreted in D as a RAM operation or a RAM property is the set O PrRAMP defined as follows: For each o ∈ O PrRAMP , the interpretation of o in D, written [[o]], is as defined in the case of the RAMP model of computation in Section 4.2.
In this section, as is to be expected, D is fixed as in Section 4.3 for the RAMP model of computation. Moreover, like in Section 4.3, it is assumed that m ∈ V.
Below, the notion of a PrRAMP term is defined. This notion makes precise what the set of possible computational processes is in the case of the PrRAMP model of computation.
A PrRAM process term, called a PrRAMP term for short, is a term from P rec that is of the form X|E , where, for each Y ∈ vars(E), the recursion equation A process that can be denoted by a PrRAMP term is called a PrRAM process or a PrRAMP for short. So, a PrRAMP is a process that is definable by a guarded linear recursive specification over ACP τ ǫ -I of the kind described above.

D as fixed above and CP PrRAMP induce the PrRAMP model of computation:
- the set of possible computational processes is the set of all processes that can be denoted by a term from CP PrRAMP ;
- for each possible computational process, the set of possible data environments is the set of all {m}-indexed data environments;
- the effect of applying the process denoted by a t ∈ CP PrRAMP to a {m}-indexed data environment µ is V ρ (t), where ρ is a flexible variable valuation that represents µ.
To the best of my knowledge, only rough sketches of probabilistic RAMs are given in the computer science literature. The PrRAMP model of computation described above is in a way based on the probabilistic RAMs sketched in [33].
In line with that paper, variants of BBRAMs (defined below) that allow for probabilistic choices to be made are considered probabilistic RAMs. They will be referred to as the PrBRAMs (Probabilistic Binary RAMs).
There is a strong resemblance between O PrRAMP and the set I PrBRAM of instructions from which the built-in programs of the PrBRAMs can be constructed. Because the concrete syntax of the instructions does not matter, I PrBRAM can be defined as follows: A PrBRAM program is a non-empty sequence C from I PrBRAM * in which instructions of the form jmp:p:i or the form prjmp:i with i > ℓ(C) do not occur. We write IS PrBRAM for the set of all PrBRAM programs.
The execution of an instruction of the form prjmp:i by a PrBRAM has no effect on the state of its memory. After execution of an instruction of the form prjmp:i by a PrBRAM, the execution proceeds with probability 1/2 to the ith instruction of its built-in program and with probability 1/2 to the next instruction of its built-in program.
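The behaviour of prjmp can be sketched as a single interpreter step. The tuple encoding of instructions, the 0-indexed program counter, and the use of a seeded random generator as the coin are all hypothetical choices for illustration; only the branching behaviour mirrors the description above.

```python
# Sketch of the prjmp:i instruction: with probability 1/2 execution
# jumps to instruction i, with probability 1/2 it falls through to the
# next instruction; the memory state is unaffected either way.

import random

def step(pc, program, rng):
    """Return the next program counter after executing program[pc]."""
    instr = program[pc]
    if instr[0] == "prjmp":
        # probabilistic jump: coin flip decides the successor
        return instr[1] if rng.random() < 0.5 else pc + 1
    return pc + 1          # every other instruction falls through here

program = [("noop",), ("prjmp", 0), ("halt",)]
```

Executing the prjmp at position 1 yields program counter 0 or 2, each with probability 1/2, while a non-probabilistic instruction deterministically advances the counter.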
The processes that are produced by the PrBRAMs when they execute their built-in program are given by a function M : IS PrBRAM → P PrRAMP that is defined

Theorem 2 in [11] because the following holds for each instruction from I PrBRAM : (a) the execution of the instruction affects at most one register and (b) the execution of the instruction increases the maximum of the number of bits in the registers at most by one.

⊓ ⊔
At the end of Section 9.1, the error probability of probabilistic computational processes was briefly discussed. The complexity class BPP is interesting because, for each F : {0, 1} * → {0, 1}, the reduction of the error probability by repeating the computation is exponential in the number of repetitions if F ∈ BPP (see e.g. [2]). This means that an arbitrary reduction of the error probability is possible by repeating the computation a bounded number of times if F ∈ BPP. Because of that, BPP is nearly as feasible a complexity class as P. Such an arbitrary reduction is not possible by repeating the computation a bounded number of times if F ∈ PP \ BPP.
In the same vein as the probabilistic variant of the RAMP model of computation has been described above, probabilistic variants of the APRAMP model and the SPRAMP model can be described.

Time and Work Complexity Measures
Below, a probabilistic time complexity measure for the PrRAMP model of computation is introduced. In preparation, it is first made precise what a probabilistic complexity measure is and what the complexity of a probabilistically computable function from ({0, 1} * ) n to {0, 1} * under a given probabilistic complexity measure is. The notion of a complexity measure has to be adapted to the probabilistic case because the probabilistic computation of a function may not always yield the correct result.
Like for the RAMP model of computation, the probabilistic sequential uniform time measure is a very plausible work measure as well.
For the probabilistic variants of the APRAMP model and the SPRAMP model, the definitions of the suitable variants of the complexity measures that have been introduced for the APRAMP model and the SPRAMP model are somewhat more involved.

Concluding Remarks
In this paper, it has been studied whether the imperative process algebra ACP τ ǫ -I can play a role in the field of models of computation. Models of computation corresponding to models based on sequential random access machines, asynchronous parallel random access machines, and synchronous parallel random access machines, as well as complexity measures for those models, could simply and directly be described in a mathematically precise way in the setting of ACP τ ǫ -I. A probabilistic variant of the model based on sequential random access machines and complexity measures for it could also be described, but in a somewhat less simple and direct way. Central in the models described are the computational processes considered instead of the abstract machines that produce those processes when they execute their built-in program.
The work presented in this paper pertains to formalizing models of computation. Little work has been done in this area. Three notable exceptions are [30,40,3]. Those papers are concerned with formalization in a theorem prover (HOL4, Isabelle/HOL, Matita) and focus on some version of the Turing machine model of computation. This makes it impracticable to compare the work presented in those papers with the work presented in this paper.
Whereas it is usual in versions of the RAM model of computation that bit strings are represented by natural numbers, here natural numbers are represented by bit strings. Moreover, the choice has been made to represent the natural number 0 by the bit string 0 and to adopt the empty bit string as the register content that indicates that a register is (as yet) unused.
D for the set of all closed ACP τ ǫ -I terms of sort D. ACP τ ǫ -I has the following constants and operators to build terms of sort C:
- a binary equality operator = : B × B → C;
- a binary equality operator = : D × D → C;
- a truth constant t : C;
- a falsity constant f : C;
- a unary negation operator ¬ : C → C;
- a binary conjunction operator ∧ : C × C → C;
- a binary disjunction operator ∨ : C × C → C;
- a binary implication operator ⇒ : C × C → C;
- a unary variable-binding universal quantification operator ∀ : C → C that binds a variable of sort D;
- a unary variable-binding existential quantification operator ∃ : C → C that binds a variable of sort D.

Lemma 6. Assuming a fixed instruction set:
1. MIMD PRIORITY CRCW PRAMs can be simulated by MIMD ARBITRARY CRCW PRAMs with the same number of RAMs and with the parallel time increased by a factor of O(log(p)), where p is the number of RAMs;
2. MIMD ARBITRARY CRCW PRAMs can be simulated by MIMD PRIORITY CRCW PRAMs with the same number of RAMs and the same parallel time.

Proof. Assume a fixed instruction set. Part 1. It is shown in [22, Section 3.1] that MIMD PRIORITY CRCW PRAMs can be simulated by MIMD EREW PRAMs with the same number of RAMs and with the parallel time increased by only a factor of O(log(p)), where p is the number of RAMs. It follows directly from the definitions concerned that MIMD EREW PRAMs can be simulated by MIMD ARBITRARY CRCW PRAMs with the same number of RAMs and the same parallel time (the programs involved can be executed directly). Hence, MIMD PRIORITY CRCW PRAMs can be simulated by MIMD ARBITRARY CRCW PRAMs with the same number of RAMs and with the parallel time increased by a factor of O(log(p)), where p is the number of RAMs.

Part 2. It follows directly from the definitions concerned that MIMD ARBITRARY CRCW PRAMs can be simulated by MIMD PRIORITY CRCW PRAMs with the same number of RAMs and the same parallel time (the programs involved can be executed directly). ⊓⊔

Lemma 7. Assuming a fixed instruction set:
1. SIMD PRIORITY CRCW PRAMs can be simulated by MIMD PRIORITY CRCW PRAMs with the same number of RAMs and with the same parallel time;
2. MIMD PRIORITY CRCW PRAMs can be simulated by SIMD PRIORITY CRCW PRAMs with the same number of RAMs and with the parallel time increased by a constant factor.

Table 3. Axioms for guarded linear recursion

s 2 ∈ Src} and O o RAMP for the set O RAMP \ O p RAMP . The operators from O o RAMP are the operators that are interpreted in D as basic RAM operations and the operators from O p RAMP are the operators that are interpreted in D as basic RAM properties.