1 Introduction

The basic model of P systems as introduced in [21] can be considered as a distributed multiset rewriting system, where all objects (if possible) evolve in parallel in the membrane regions and may be communicated through the membranes. P systems operating on more complex objects (e.g., strings, arrays) are also frequently considered; for instance, see [10].

Besides the maximally parallel derivation mode, many other derivation modes have been investigated during the last two decades. In this paper, the definitions of the standard derivation modes used for P systems are therefore recalled, various interpretations of derivation modes known from the area of P systems are illustrated, and well-known results are presented in a different manner.

Moreover, we consider not only the standard total halting, but also other halting conditions such as unconditional halting, see [7], and partial halting, see [14]. We explain, and illustrate with examples, how the use of different halting modes may considerably change the computing power of P systems.

Overviews on the field of P systems can be found in the monograph [22] and the Handbook of Membrane Computing [23]; for current news and results we refer to the P systems webpage [26] as well as to the Bulletin of the International Membrane Computing Society. The reader is assumed to be familiar with the basic definitions and notations of P systems as well as with the commonly used derivation modes and halting conditions.

The rest of the paper is organized as follows: In the next section, basic notions from formal language theory needed in this paper are recalled. In Sect. 3, the definition of the basic model of P systems is given and explained, including the standard derivation modes used in many papers on P systems, the basic types of rules, as well as the main halting conditions found in the literature and considered in more detail in Sect. 7. Some well-known results are summarized in a compact form in Sect. 4; special focus is put on results for catalytic P systems regarding the number of rules needed for simulating (the instructions of) register machines. In Sect. 5, important results for P systems with control mechanisms are recalled, including the variant of P systems with target selection, which is one of the very few models in the P systems literature that takes advantage of using a non-trivial membrane structure. A separate section is then devoted to a special derivation mode called minimal parallelism and its variants. Examples and results for halting conditions different from the standard variant of total halting are considered in Sect. 7. A short summary concludes the paper.

2 Prerequisites

The set of integers is denoted by \({\mathbb {Z}}\), and the set of non-negative integers by \({\mathbb {N}}\). Given an alphabet V, a finite non-empty set of abstract symbols, the free monoid generated by V under the operation of concatenation is denoted by \(V^{*}\). The elements of \(V^{*}\) are called strings, the empty string is denoted by \(\lambda\), and \(V^{*}\backslash \{\lambda \}\) is denoted by \(V^{+}\). For an arbitrary alphabet \(V=\{a_{1},\ldots ,a_{n}\}\), the number of occurrences of a symbol \(a_{i}\) in a string x is denoted by \(|x|_{a_{i}}\), while the length of a string x is denoted by \(|x|=\sum _{a_{i}\in V}|x|_{a_{i}}\). The Parikh vector associated with x with respect to \(a_{1},\ldots ,a_{n}\) is (\(|x|_{a_{1}},\ldots ,|x|_{a_{n}}\)). The Parikh image of an arbitrary language L over \(\{a_{1},\ldots ,a_{n}\}\) is the set of all Parikh vectors of strings in L, and is denoted by Ps(L). For a family of languages FL, the family of Parikh images of languages in FL is denoted by PsFL, while for families of languages over a one-letter (d-letter) alphabet, the corresponding sets of non-negative integers (d-vectors with non-negative components) are denoted by NFL (\(N^{d}FL).\)

A (finite) multiset over a (finite) alphabet \(V=\{a_{1},\ldots ,a_{n}\}\) is a mapping \(f:V\rightarrow {\mathbb {N}}\) and can be represented by \(\langle a_{1}^{f(a_{1})},\ldots ,a_{n}^{f(a_{n})}\rangle\) or by any string x for which \((|x|_{a_{1}},\ldots ,|x|_{a_{n}})=(f(a_{1}),\ldots ,f(a_{n}))\). In the following we will not distinguish between a vector \((m_{1},\ldots ,m_{n})\), a multiset \(\langle a_{1}^{m_{1}},\ldots ,a_{n}^{m_{n}}\rangle\) or a string x having \((|x|_{a_{1}},\ldots ,|x|_{a_{n}})=(m_{1},\ldots ,m_{n})\). Fixing the sequence of symbols \(a_{1},\ldots ,a_{n}\) in an alphabet V in advance, the representation of the multiset \(\langle a_{1}^{m_{1}},\ldots ,a_{n}^{m_{n}}\rangle\) by the string \(a_{1}^{m_{1}}\ldots a_{n}^{m_{n}}\) is unique. The set of all finite multisets over an alphabet V is denoted by \(V^{\circ }\).
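
As a small illustration of this identification of strings, multisets, and vectors, the following Python snippet (our own sketch, not taken from the paper) computes the Parikh vector of a string over a fixed alphabet:

```python
# A minimal illustration (our own sketch) of the identification of a string
# with its Parikh vector / multiset over a fixed alphabet (a_1, ..., a_n).
from collections import Counter

alphabet = ("a", "b", "c")
x = "abacba"
parikh = tuple(Counter(x)[s] for s in alphabet)
print(parikh)   # (3, 2, 1), i.e., the multiset <a^3, b^2, c^1>
```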

The families of regular, context-free, and recursively enumerable string languages are denoted by REG, CF, and RE, respectively. It is well known that \(PsREG=PsCF\), which is the reason why CF plays no role at all in the area of multiset rewriting, and why in the area of membrane computing we usually obtain characterizations of PsREG and PsRE.

An extended Lindenmayer system (an E0L system for short) is a construct \(G = (V, T, P, w)\), where V is an alphabet, \(T \subseteq V\) is the terminal alphabet, \(w \in V^*\) is the axiom, and P is a finite set of non-cooperative rules over V of the form \(a \rightarrow u\). In a derivation step, each symbol present in the current sentential form is rewritten using one rule arbitrarily chosen from P. The language generated by G, denoted by L(G), consists of all the strings over T which can be generated in this way by starting from the initial string w. An E0L system with \(T = V\) is called a 0L system.
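
The following minimal Python sketch (purely illustrative; the function names are ours) shows one derivation step of a 0L system, where every symbol of the current string is rewritten in parallel; with the single rule \(a \rightarrow aa\) and axiom aa it produces the language \(L_1=\{ a^{2^{n}}\mid n\ge 1 \}\) used as a running example later in the paper:

```python
# One derivation step of a 0L system: every symbol of the current string is
# rewritten in parallel by a (non-deterministically chosen) rule for it.
import random

def ol_step(rules, word):
    """rules maps each symbol to a non-empty list of right-hand sides."""
    return "".join(random.choice(rules[symbol]) for symbol in word)

# The 0L system with the single rule a -> aa and axiom aa doubles the string
# in every step, so after n-1 steps we obtain a^(2^n), n >= 1.
rules = {"a": ["aa"]}
word = "aa"
for _ in range(4):
    print(len(word))       # 2, 4, 8, 16
    word = ol_step(rules, word)
```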

For more details on formal language theory, the reader is referred to monographs and handbooks in this area such as [9] and [24].

2.1 Register machines

A register machine is a tuple \(M=(m,B,l_{0},l_{h},P)\), where m is the number of registers, B is a set of labels, \(l_{0}\in B\) is the initial label, \(l_{h}\in B\) is the final label, and P is the set of instructions labeled by elements of B. The instructions of M can be of the following forms:

  • \(l_{1}:(ADD(j),l_{2},l_{3})\), with \(l_{1}\in B\backslash \{l_{h}\}\), \(l_{2},l_{3}\in B\), \(1\le j\le m\).

    Increases the value of register j by one, followed by a non-deterministic jump to instruction \(l_{2}\) or \(l_{3}\). This instruction is usually called increment.

  • \(l_{1}:(SUB(j),l_{2},l_{3})\), with \(l_{1}\in B\backslash \{l_{h}\}\), \(l_{2},l_{3}\in B\), \(1\le j\le m\).

    If the value of register j is zero then jump to instruction \(l_{3}\); otherwise, the value of register j is decreased by one, followed by a jump to instruction \(l_{2}\). The two cases of this instruction are usually called zero-test and decrement, respectively.

  • \(l_h : HALT\). Stops the execution of the register machine.

A configuration of a register machine is described by the contents of each register and by the value of the current label, which indicates the next instruction to be executed. Computations start by executing the instruction labeled \(l_{0}\) of P, and terminate by reaching the HALT-instruction \(l_{h}\).

M is called deterministic if in all ADD-instructions \(p:\left( \mathtt {ADD}\left( r\right) ,q,s\right)\), it holds that \(q=s\); in this case we write \(p:\left( \mathtt {ADD}\left( r\right) ,q\right)\).
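
For readers who prefer code to formal definitions, the following Python sketch (illustrative only; all names and the encoding of instructions are ours, and, for simplicity, only deterministic ADD-instructions as used in the accepting case described below are handled) executes such a register machine program:

```python
# Illustrative simulator for a deterministic register machine; the program
# is given as a dictionary mapping labels to instructions.
def run(program, registers, l0="l0", lh="lh"):
    """Instruction formats (our own encoding):
       ("ADD", j, l2)      -- increment register j, then jump to l2
       ("SUB", j, l2, l3)  -- decrement and jump to l2, or jump to l3 if zero
       ("HALT",)           -- stop (associated with the final label lh)"""
    label = l0
    while label != lh:
        instr = program[label]
        if instr[0] == "ADD":
            _, j, l2 = instr
            registers[j] += 1
            label = l2
        else:  # "SUB"
            _, j, l2, l3 = instr
            if registers[j] > 0:
                registers[j] -= 1   # decrement case
                label = l2
            else:                   # zero-test case
                label = l3
    return registers

# Example: add the contents of register 2 to register 1.
program = {
    "l0": ("SUB", 2, "l1", "lh"),
    "l1": ("ADD", 1, "l0"),
    "lh": ("HALT",),
}
print(run(program, {1: 3, 2: 2}))   # {1: 5, 2: 0}
```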

Register machines provide a computationally complete model for computations with natural numbers:

In the generating case, we start with empty registers, use the last two registers for the necessary computations, and take as results the vectors of natural numbers \(\left( x_{1},\ldots ,x_{d}\right)\) obtained as the contents of the first d registers in all possible halting computations. Without loss of generality, we may assume that during any computation of M, only the registers \(d+1\) and \(d+2\) are decremented.

In the accepting case, we start with the natural numbers \(x_{1},\ldots ,x_{d}\) in the first d registers (and with 0 in the registers \(d+1\) and \(d+2\)) and use the two additional registers \(d+1\) and \(d+2\) for the necessary computations; in this case, all registers may be decremented; moreover, the register machine can be assumed to be deterministic, i.e., we only have ADD-instructions of the form \(l_{1}:\left( \mathtt {ADD}\left( j\right) ,l_{2}\right)\), with \(l_{1}\in B{\setminus } \left\{ l_{h}\right\}\), \(l_{2}\in B\), \(1\le j\le m\). The vector \(\left( x_{1},\ldots ,x_{d}\right)\) is accepted if and only if M halts with the natural numbers \(x_{1},\ldots ,x_{d}\) having been given as input in the first d registers.

For these and other useful results on the computational power of register machines, we refer to [20].

3 A general model for hierarchical P systems

We now recall the main definitions of the general model for hierarchical P systems and the basic derivation modes as defined, for example, in [18]. Moreover, we define the halting conditions discussed in this paper.

A (hierarchical) P system (with rules of type X) working in the derivation mode \(\delta\) is a construct

$$\begin{aligned} \varPi =\left( V,T,\mu ,w_1,\dots ,w_m,R_1,\dots ,R_m,f, {\Longrightarrow }_{\varPi ,\delta }\right)\; \mathrm{where}\end{aligned}$$
  • V is the alphabet of objects;

  • \(T\subseteq V\) is the alphabet of terminal objects;

  • \(\mu\) is the hierarchical membrane structure (a rooted tree of membranes) with the membranes uniquely labeled by the numbers from 1 to m;

  • \(w_i\in V^{*}\), \(1\le i\le m\), is the initial multiset in membrane i;

  • \(R_i\), \(1\le i\le m\), is a finite set of rules of type X assigned to membrane i;

  • f is the label of the membrane from which the result of a computation is taken (in the generating case) or into which the input multiset is added to \(w_f\) (in the accepting case);

  • \({\Longrightarrow }_{\varPi ,\delta }\) is the derivation relation under the derivation mode \(\delta\).

The symbol X in “rules of type X” may stand for “evolution”, “communication”, “membrane evolution”, etc. In this paper, we will mainly consider non-cooperative as well as catalytic and purely catalytic rules, see Sect. 3.2.

A configuration is a list of the contents of each membrane region; a sequence of configurations \(C_1,\dots ,C_k\) is called a computation in the derivation mode \(\delta\) if \(C_i{\Longrightarrow }_{\varPi ,\delta } C_{i+1}\) for \(1\le i<k\). The derivation relation \({\Longrightarrow }_{\varPi ,\delta }\) is defined by the set of rules in \(\varPi\) and the given derivation mode which determines the multiset of rules to be applied to the multisets contained in each membrane region.

The language generated by \(\varPi\) is the set of all terminal multisets which can be obtained in the output membrane f starting from the initial configuration \(C_1=(w_1,\dots ,w_m)\) using the derivation mode \(\delta\) in a halting computation, i.e.,

$$\begin{aligned} L_{{\text {gen}},\delta }\left( \varPi \right) =\left\{ \left( C(f)\right) _{T^{\circ }} \mid C_1\overset{*}{\Longrightarrow }_{\varPi ,\delta }C \wedge \lnot \exists C^{\prime }:\ C{\Longrightarrow }_{\varPi ,\delta }C^{\prime }\right\} , \end{aligned}$$

where \(\left( C(f)\right) _{T^{\circ }}\) stands for the terminal part of the multiset contained in the output membrane f of the configuration C; the configuration C is halting, i.e., no further configuration \(C'\) can be derived from it.

The family of languages of multisets generated by P systems of type X with at most n membranes in the derivation mode \(\delta\) is denoted by \(Ps_{{\text {gen}},\delta }OP_n\left( X \right)\).

We may also consider P systems as accepting mechanisms: in membrane f, we add the input multiset \(w_0\) to \(w_f\) in the initial configuration \(C_1=(w_1,\dots ,w_m)\) thus obtaining \(C_1[w_0]=(w_1,\dots ,w_fw_0,\dots ,w_m)\); the input multiset \(w_0\) is accepted if there exists a halting computation in the derivation mode \(\delta\) starting from \(C_1[w_0]\), i.e.,

$$\begin{aligned}L_{{\text {acc}},\delta }\left( \varPi \right) & =\left\{ \vphantom{\overset{*}{\Longrightarrow }_{\varPi ,\delta }}w_0\in T^{\circ }\mid \exists C: \right. \\ & \quad\left.\left( C_1[w_0]\overset{*}{\Longrightarrow }_{\varPi ,\delta }C \wedge \lnot \exists C^{\prime }:\ C{\Longrightarrow }_{\varPi ,\delta }C^{\prime } \right) \right\} . \end{aligned}$$

Then, the family of languages of multisets accepted by P systems of type X with at most n membranes in the derivation mode \(\delta\) is denoted by \(Ps_{{\text {acc}},\delta }OP_n\left( X \right)\).

We finally mention that P systems can also be used to compute functions and relations, using f both as input and output membrane or even using two different membranes for input and output. Yet, in this paper, we will mainly focus on the generating case.

3.1 Derivation modes

The set of all multisets of rules applicable in a P system to a given configuration C is denoted by \(Appl(\varPi , C)\) and can be restricted by imposing specific conditions, thus yielding the following basic derivation modes (for example, see [18] for formal definitions):

  • asynchronous mode (abbreviated asyn): at least one rule is applied;

  • sequential mode (sequ): only one rule is applied;

  • maximally parallel mode (max): a non-extendable multiset of rules is applied;

  • maximally parallel mode with maximal number of rules (\(max_{rules}\)): a non-extendable multiset of rules of maximal possible cardinality is applied;

  • maximally parallel mode with maximal number of objects (\(max_{objects}\)): a non-extendable multiset of rules affecting as many objects as possible is applied.

In [6], the set variants of these derivation modes are considered, i.e., each rule can be applied at most once. Thus, starting from the set of all sets of applicable rules, we obtain the set modes sasyn, smax, \(smax_{rules}\), and \(smax_{objects}\) (the sequential mode is already a set mode by definition):

  • asynchronous set mode (abbreviated sasyn): at least one rule is applied, but each rule at most once;

  • maximally parallel set mode (smax): a non-extendable set of rules is applied;

  • maximally parallel set mode with maximal number of rules (\(smax_{rules}\)): a non-extendable set of rules of maximal possible cardinality is applied;

  • maximally parallel set mode with maximal number of objects (\(smax_{objects}\)): a non-extendable set of rules affecting as many objects as possible is applied.

Let us denote the set of all multisets (possibly only sets) of rules applicable in a P system \(\varPi\) to a given configuration C in the derivation mode \(\delta\) by \(Appl(\varPi , C,\delta )\). We immediately observe that \(Appl(\varPi , C,asyn ) = Appl(\varPi , C)\).

To collect the set and multiset derivation modes, we use the following notations:

\(\quad D_{S} = \{ sequ,sasyn,smax,smax_{rules},smax_{objects}\}\) and

\(\quad D_{M} = \{ asyn,max,max_{rules},max_{objects}\}\).
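
To make the interplay between \(Appl(\varPi , C)\) and the derivation modes more concrete, the following Python sketch enumerates the applicable multisets of rules for a one-membrane system with non-cooperative rules only and filters them according to some of the multiset modes; the set modes would additionally restrict every rule to at most one occurrence. All function names are ours, and the enumeration is brute force, not an efficient implementation:

```python
# Brute-force enumeration of Appl(Pi, C) for a one-membrane system with
# non-cooperative rules only, and selection according to some of the
# multiset derivation modes defined above (all names are ours).
from collections import Counter
from itertools import product

def applicable_multisets(rules, config):
    """rules: list of (lhs_symbol, rhs_Counter); config: Counter of objects.
       Returns Appl(Pi, C, asyn): all non-empty multisets of rules,
       represented as Counters over rule indices, applicable to config."""
    bounds = [config[lhs] for lhs, _ in rules]
    result = []
    for counts in product(*(range(b + 1) for b in bounds)):
        if sum(counts) == 0:
            continue
        needed = Counter()
        for (lhs, _), k in zip(rules, counts):
            needed[lhs] += k
        if all(needed[s] <= config[s] for s in needed):
            result.append(Counter({i: k for i, k in enumerate(counts) if k > 0}))
    return result

def select(multisets, mode):
    """Filter Appl(Pi, C, asyn) according to the chosen derivation mode."""
    if not multisets or mode == "asyn":
        return multisets
    if mode == "sequ":
        return [m for m in multisets if sum(m.values()) == 1]
    # non-extendable multisets: not strictly contained in another applicable one
    maximal = [m for m in multisets
               if not any(n != m and (m & n) == m for n in multisets)]
    if mode == "max":
        return maximal
    if mode == "max_rules":   # non-extendable and of maximal cardinality
        best = max(sum(m.values()) for m in maximal)
        return [m for m in maximal if sum(m.values()) == best]
    raise ValueError("mode not implemented in this sketch: " + mode)

# Example: rules r0: a -> bb and r1: a -> c applied to the configuration a^2;
# mode max yields the three non-extendable multisets {r0: 2}, {r1: 2}, {r0, r1}.
rules = [("a", Counter("bb")), ("a", Counter("c"))]
for m in select(applicable_multisets(rules, Counter("aa")), "max"):
    print(dict(m))
```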

3.2 Standard rule variants

Non-cooperative rules have the form \(a \rightarrow w\), where a is a symbol and w is a multiset, catalytic rules have the form \(ca \rightarrow cw\), where the symbol c is called the catalyst, and cooperative rules have no restrictions on the form of the left-hand side. These types of rules will be denoted by ncoo (non-cooperative), pcat (purely catalytic), and coo (cooperative); if both non-cooperative and catalytic rules are allowed, we write cat (catalytic).

If the P system has more than one membrane, each symbol on the right-hand side may have assigned a target where the symbol has to be sent after the application of the rule; the targets take into account the tree structure of the membranes:

  • here: the symbol stays in the membrane where the rule is applied;

  • out: the symbol is sent to the outer membrane, i.e., the membrane enclosing the membrane where the rule is applied;

  • in: the symbol is sent to an inner membrane, i.e., a membrane enclosed by the membrane where the rule is applied;

  • \(in_j\): the symbol is sent to the inner membrane labeled by j.

3.3 Halting conditions

Besides the standard total halting, where no (multi)set of rules is applicable any more to the current configuration, several other variants of halting conditions have been considered in the literature:

  • total halting (H): the common halting strategy where the computation stops when no (multi)set of rules is applicable any more;

  • unconditional halting (u): the result of a computation can be taken from every configuration derived from the initial one (possibly only taking terminal results);

  • partial halting (h): the set of rules R is partitioned into disjoint subsets \(R_1\) to \(R_h\), and a computation stops if there is no multiset of rules applicable to the current configuration which contains a rule from every set \(R_j\), \(1\le j\le h\);

  • halting with states (s): the configuration in which a derivation may stop must fulfill a recursive (i.e., decidable) condition (which corresponds to a final state).

The variant of unconditional halting was introduced in [7]. Partial halting, for example, was investigated in [3, 4, 14], using the membranes for partitioning the rules. Formal definitions for the halting conditions H, h, and s can be found in [18].

For \(\beta \in \{ H,h,u,s\}\), we add the halting condition \(\beta\) in the description of the generated or accepted language, i.e., we then write \(L_{\gamma ,\delta ,\beta }\left( \varPi \right)\), \(\gamma \in \left\{ gen,acc\right\}\). The same extension is made for the corresponding families of languages of multisets, i.e., for \(n\ge 1\), we write \(Y_{\gamma ,\delta ,\beta }OP_n\left( X \right)\). By default, \(\beta\) is understood to be the total halting H and then usually omitted in all these notations.

3.4 Flattening

As many variants of P systems can be flattened to systems with only one membrane, see [13], we may often assume the simplest membrane structure consisting of only one membrane, which in effect reduces the P system to a multiset processing mechanism; observing that then \(f=1\), in what follows we will use the reduced notation

$$\begin{aligned} \varPi =\left( V,T,w,R, {\Longrightarrow }_{\varPi ,\delta }\right) . \end{aligned}$$

In case we use catalysts, we write

$$\begin{aligned} \varPi =\left( V,C,T,w,R, {\Longrightarrow }_{\varPi ,\delta }\right) \end{aligned}$$

with \(C\subseteq (V{\setminus } T)\) denoting the set of catalysts.

For a one-membrane system, the definitions of the language generated by \(\varPi\) and the language accepted by \(\varPi\) using the derivation mode \(\delta\) and the halting condition \(\beta\) can be written in a simpler way; for example, for total halting and with \(v_{T^{\circ }}\) denoting the terminal part of the multiset v, we have

$$\begin{aligned} L_{{\text {gen}},\delta }\left( \varPi \right)= & {} \left\{ v_{T^{\circ }}\mid w\overset{*}{\Longrightarrow }_{\varPi ,\delta }v \wedge \lnot \exists z:\ v{\Longrightarrow }_{\varPi ,\delta }z\right\} \hbox { and}\\ L_{{\text {acc}},\delta }\left( \varPi \right)= & {} \left\{ w_0\in T^{\circ }\mid \exists v:\ \left( ww_0\overset{*}{\Longrightarrow }_{\varPi ,\delta }v \wedge \lnot \exists z:\ v{\Longrightarrow }_{\varPi ,\delta }z \right) \right\} . \end{aligned}$$

The family of languages of multisets generated by one-membrane P systems of type X in the derivation mode \(\delta\) and with the halting condition \(\beta\) is denoted by \(Ps_{{\text {gen}},\delta ,\beta }OP\left( X \right)\).

The family of languages of multisets accepted by one-membrane P systems of type X in the derivation mode \(\delta\) and with the halting condition \(\beta\) is denoted by \(Ps_{{\text {acc}},\delta ,\beta }OP\left( X \right)\).

In the following, we will mainly focus on the generative case, and when writing \(Ps_{\delta ,\beta }OP\left( X \right)\) we by default will mean \(Ps_{{\text {gen}},\delta ,\beta }OP\left( X \right)\).
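
Putting the pieces together, the following sketch enumerates, up to a step bound, the computations of a flattened P system with non-cooperative rules and collects the terminal parts of the halting configurations, i.e., an approximation of \(L_{{\text {gen}},\delta }\left( \varPi \right)\) under total halting. It assumes the helper functions applicable_multisets and select from the sketch in Sect. 3.1; the step bound and all names are ours:

```python
# Enumerating halting computations of a flattened P system with
# non-cooperative rules (assuming applicable_multisets and select from the
# sketch in Sect. 3.1).
from collections import Counter

def apply_multiset(rules, config, multiset):
    new = Counter(config)
    for i, k in multiset.items():
        lhs, rhs = rules[i]
        new[lhs] -= k
        for sym, c in rhs.items():
            new[sym] += c * k
    return +new                                   # drop symbols with count 0

def generate(rules, terminals, w, mode, bound=6):
    results, frontier = set(), [Counter(w)]
    for _ in range(bound):
        successors = []
        for config in frontier:
            choices = select(applicable_multisets(rules, config), mode)
            if not choices:                       # total halting: nothing applicable
                results.add(frozenset((a, k) for a, k in config.items()
                                      if a in terminals))
                continue
            successors += [apply_multiset(rules, config, m) for m in choices]
        # de-duplicate configurations before the next step
        frontier = list({tuple(sorted(c.items())): c for c in successors}.values())
    return results

# Example: rules A -> AA and A -> a, axiom AA, derivation mode max; the halting
# configurations contain only a's, giving multisets a^n with n >= 2 (up to the
# step bound), staying within PsREG (cf. Sect. 4.1).
rules = [("A", Counter({"A": 2})), ("A", Counter("a"))]
print(sorted(dict(r).get("a", 0) for r in generate(rules, {"a"}, "AA", "max")))
```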

4 Some well-known results

In this section, we recall some well-known results, which usually are not stated in the compact form given here.

4.1 Non-cooperative rules

Using only non-cooperative rules leaves us on the level of semi-linear sets: non-cooperative rules correspond to context-free rules, and the derivation tree obtained with context-free rules does not depend on whether the rules are applied sequentially or in parallel, in whichever of the derivation modes defined above. Moreover, context-free (string or multiset) languages are closed under projections; hence, taking (even only terminal) results out of a specific output membrane does not make any difference. Therefore, we may state the following result:

Theorem 1

For any \(Y\in \{ N,Ps\}\) and any \(n\ge 1\) as well as any derivation mode \(\delta \in D_{S} \cup D_{M}\),

$$\begin{aligned} Y_{{\text {gen}},\delta }OP_{n}\left( ncoo \right) = YREG. \end{aligned}$$

Although P systems working in the maximally parallel derivation mode are a parallel mechanism, we cannot go beyond PsREG, see Theorem 1.

For example, the rule \(a\rightarrow aa\) used in parallel very much reminds us of a 0L system, i.e., a Lindenmayer system of the simplest form, which, when starting from the axiom aa, yields the language \(L_1=\{ a^{2^{n}}\mid n\ge 1 \}\). In order to also get this language with P systems working in one of the maximally parallel derivation modes, we either need some control mechanism (see Sect. 5) or some other special halting condition (see Sect. 7).

4.2 The importance of using catalysts

If in a one-membrane system we only have one catalyst c and only catalytic rules assigned to c, then this corresponds to a sequential use of non-cooperative rules, which together with Theorem 1 yields the following result:

Theorem 2

For any \(Y\in \{ N,Ps\}\) and any derivation mode \(\delta \in D_{S} \cup D_{M}\),

$$\begin{aligned} Y_{{\text {gen}},\delta }OP\left( pcat_{1} \right) = Y_{{\text {gen,sequ }}}OP\left( pcat_{1} \right) = Y_{{\text {gen,sequ }}}OP\left( ncoo\right) = YREG. \end{aligned}$$

Even without additional control mechanisms, only two (three) catalysts are sufficient to obtain computational completeness for (purely) catalytic P systems using the derivation mode max, see [12]. In a more general way, the following results were already proved there:

Theorem 3

For any \(d\ge 1\) and any \(k\ge d+2\),

$$\begin{aligned} Ps_{{\text {acc,max}}}OP\left( pcat_{k+1} \right) = Ps_{{\text {acc,max}}}OP\left( cat_{k} \right) = N^{d}RE. \end{aligned}$$

The complexity of the construction, for all these derivation modes, has been considerably reduced since the original paper from 2005, for example, see [1, 5, 25], and [6].

Although not yet stated in [12], we mention that these results are also valid when replacing the derivation mode max by any other maximally parallel (set) derivation mode, i.e., for any \(\delta\) in

$$\begin{aligned} \{ max,max_{rules},max_{objects},smax,smax_{rules},smax_{objects} \}. \end{aligned}$$

The following theorem states the best results known so far with respect to the number of catalysts and the number of rules for catalytic P systems; the proof follows the one given in [6] for the set maximally parallel derivation modes.

Theorem 4

For any register machine \(M=\left( d,B,l_0,l_h,R\right)\), with \(m\le d\) being the number of decrementable registers, we can construct a catalytic P system

$$\begin{aligned} \varPi =\left( V,C,T,w,R_1, {\Longrightarrow }_{\varPi ,\delta }\right) \end{aligned}$$

which works with any of the maximally parallel (set) derivation modes, i.e., with any \(\delta\) in

$$\begin{aligned} \{ max,max_{rules},max_{objects}, smax,smax_{rules},smax_{objects} \}, \end{aligned}$$

and simulates the computations of M such that

$$\begin{aligned} |R_1|\le \mathtt{ADD}^1(R)+2\times \mathtt{ADD}^2(R)+5\times \mathtt{SUB}(R)+5\times m+1, \end{aligned}$$

where \(\mathtt{ADD}^1(R)\) denotes the number of deterministic ADD-instructions in R, \(\mathtt{ADD}^2(R)\) the number of non-deterministic ADD-instructions in R, and \(\mathtt{SUB}(R)\) the number of SUB-instructions in R.

Proof

We simulate a register machine \(M=\left( d,B,l_0,l_h,R\right)\) by a catalytic P system \(\varPi\), with \(m\le d\) being the number of decrementable registers.

For all d registers, \(n_i\) copies of the symbol \(o_i\) are used to represent the value \(n_i\) in register i, \(1\le i\le d\). For each of the m decrementable registers, we take a catalyst \(c_i\) and two specific symbols \(d_i,e_i\), \(1\le i\le m\), for simulating SUB-instructions on these registers. For every \(l\in B\), we use \(p_l\), and also its variants \({\bar{p}}_l, {\hat{p}}_l, {\tilde{p}}_l\) for \(l\in B_{\mathtt{SUB}}\), where \(B_{\mathtt{SUB}}\) denotes the set of labels of SUB-instructions; \(w_0\) stands for the additional input present at the beginning, for example, for the given input in case of accepting systems.

$$\begin{aligned} \varPi &= \left( V,C,T,w,R_1, {\Longrightarrow }_{\varPi ,\delta }\right) ,\\ w &= c_{1}\dots c_{m}d_{1}\dots d_{m}p_{l_0}w_0,\\ V &= C \cup D \cup E \cup T \cup \{ \#\} \cup \{ p_l\mid l\in B\} \cup \{ {\bar{p}}_l, {\hat{p}}_l, {\tilde{p}}_l\mid l\in B_{\mathtt{SUB}}\} , \\ C &= \{c_{i}\mid 1\le i\le m\} ,\\ D &= \{d_{i}\mid 1\le i\le m\} ,\\ E &= \{e_{i}\mid 1\le i\le m\} ,\\ T &= \{o_{i}\mid 1\le i\le d\} ,\\ R_{1} &= \{ p_{j}\rightarrow o_{r}{p}_{k}D_m,\ p_{j}\rightarrow o_{r}{p}_{l}D_m \mid j:(\mathtt{ADD}(r),k,l)\in R\}\\ &\quad \cup \{ p_{j}\rightarrow {\hat{p}}_je_rD_{m,r},\ p_{j}\rightarrow {\bar{p}}_jD_{m,r},\ {\hat{p}}_j\rightarrow {\tilde{p}}_jD^{\prime }_{m,r},\ {\bar{p}}_j\rightarrow p_{l}D_{m},\ {\tilde{p}}_j \rightarrow p_{k}D_{m} \mid j:(\mathtt{SUB}(r),k,l)\in R\}\\ &\quad \cup \{ c_{r}o_{r}\rightarrow c_{r}d_{r},\ c_{r}d_{r}\rightarrow c_{r},\ c_{r\oplus _{m}1}e_r\rightarrow c_{r\oplus _{m}1}\mid 1\le r\le m\}\\ &\quad \cup \{ {d}_{r}\rightarrow \# ,\ c_r{e}_{r}\rightarrow c_r\# \mid 1\le r\le m\}\\ &\quad \cup \{ \# \rightarrow \# \} . \end{aligned}$$

We define \(r{\oplus _ {m}}1:=r+1\) for \(r<m\) and \(m{\oplus _ {m}}1:=1\).

Usually, every catalyst \(c_i\), \(i\in \{1,\dots ,m\}\), is kept busy with the symbol \(d_i\) using the rule \(c_{i}{d}_{i}\rightarrow c_{i}\), as otherwise the symbols \(d_i\) would have to be trapped by the rule \({d}_{i}\rightarrow \#\), and the trap rule \(\# \rightarrow \#\) then enforces an infinite non-halting computation. Only during the simulation of a SUB-instruction on register r is the corresponding catalyst \(c_r\) left free, for decrementing or for zero-checking in the second step of the simulation; in the decrement case, both \(c_r\) and its “coupled” catalyst \(c_{r{\oplus _ {m}}1}\) need to be free for specific actions in the third step of the simulation.

For the simulation of instructions, we use the following shortcuts:

$$\begin{aligned} D_m= & {} \prod _{i\in \left\{ 1,\ldots ,m\right\} } d_{i} ,\\ D_{m,r}= & {} \prod _{i\in \left\{ 1,\ldots ,m\right\} {\setminus } \{ r \} } d_{i} ,\\ D^{\prime }_{m,r}= & {} \prod _{i\in \left\{ 1,\ldots ,m\right\} {\setminus } \{ r , r{\oplus _ {m}}1 \} }d_{i} . \end{aligned}$$

The HALT-instruction labeled \(l_h\) is simply simulated by not introducing the corresponding state symbol \(p_{l_h}\), i.e., replacing it by \(\lambda\), in all rules defined in \(R_1\).

Each ADD-instruction \(j:(\mathtt{ADD}(r),k,l)\), for \(r\in \{1,\dots ,d\}\), can easily be simulated by the rules \(p_{j}\rightarrow o_{r}{p}_{k}D_m\) and \(p_{j}\rightarrow o_{r}{p}_{l}D_m\); in parallel, the rules \(c_{i}{d}_{i}\rightarrow c_{i}\), \(1\le i\le m\), have to be carried out, as otherwise the symbols \(d_i\) would have to be trapped by the rules \({d}_{i}\rightarrow \#\).

Each SUB-instruction \(j:(\mathtt{SUB}(r),k,l)\) is simulated as shown in the table below, where rows sharing the same step number are carried out within one derivation step, and the rules in brackets [ and ] are those carried out in case of a wrong choice:

| Step | Register r is not empty | Register r is empty |
|------|-------------------------|---------------------|
| 1 | \(p_{j}\rightarrow {\hat{p}}_je_rD_{m,r}\) | \(p_{j}\rightarrow {\bar{p}}_jD_{m,r}\) |
| 2 | \(c_{r}o_{r}\rightarrow c_{r}d_{r}\ [c_r{e}_{r}\rightarrow c_r\# ]\) | \(c_{r}\) should stay idle |
| 2 | \({\hat{p}}_j\rightarrow {\tilde{p}}_jD^{\prime }_{m,r}\) | \({\bar{p}}_j\rightarrow p_{l}D_{m}\) |
| 3 | \(c_{r}d_{r}\rightarrow c_{r}\ [{d}_{r}\rightarrow \# ]\) | \([{d}_{r}\rightarrow \# ]\) |
| 3 | \({\tilde{p}}_j\rightarrow p_{k}D_{m}\) |  |
| 3 | \(c_{r\oplus _{m}1}e_r\rightarrow c_{r\oplus _{m}1}\) |  |

In the first step of the simulation of each instruction (ADD-instruction, SUB-instruction, and even the HALT-instruction), due to the introduction of \(D_m\) in the previous step (the initial configuration contains these symbols as well), every catalyst \(c_r\) is kept busy by the corresponding symbol \(d_r\), \(1\le r\le m\). This also guarantees that the zero-check on register r works correctly: in the case of a wrong choice, two symbols \(d_r\) are present, which enforces \({d}_{r}\rightarrow \#\) to be applied. \(\square\)

Exactly the same construction as elaborated above can be used when allowing for \(m+2\) catalysts, with catalyst \(c_{m+1}\) being used with the state symbols and catalyst \(c_{m+2}\) being used with the trap rules.

Yet for the purely catalytic case, only one additional catalyst \(c_{m+1}\) is needed, to be used with all the non-cooperative rules; in this case, however, a slightly more complicated simulation of SUB-instructions is needed, see [25], where for catalytic P systems

$$\begin{aligned} |R_1|\le 2\times \mathtt{ADD}^1(R)+3\times \mathtt{ADD}^2(R)+6\times \mathtt{SUB}(R)+5\times m+1 \end{aligned}$$

and for purely catalytic P systems

$$\begin{aligned} |R_1|\le 2\times \mathtt{ADD}^1(R)+3\times \mathtt{ADD}^2(R)+6\times \mathtt{SUB}(R)+6\times m+1 \end{aligned}$$

are shown.

The simulation results established above hold for register machines and their corresponding (purely) catalytic P systems, both generating and accepting, and even for systems computing functions or relations on natural numbers.

Many computational completeness results for variants of P systems are obtained by simulating register machines, which in fact means that a sequential machine has to be simulated by a parallel mechanism. Exactly this feature of breaking down parallelism to sequentiality is the main reason for using catalysts: when using a maximally parallel (set) derivation mode \(\delta\), for removing a single copy of a symbol \(o_r\) in the decrement case of a SUB-instruction of the register machine, we cannot use the non-cooperative rule \(o_r \rightarrow \lambda\); instead, we have to use the catalytic rule \(co_r \rightarrow c\).

What happens in the case of two catalysts in purely catalytic P systems (and one catalyst in the case of catalytic P systems) has been one of the most intriguing open problems in the area of P systems for a long time; e.g., see [17], where it is shown that catalytic P systems with one catalyst can simulate partially blind register machines and partially blind counter automata.

With respect to the importance of using catalytic rules, the set derivation modes offer new opportunities: in combination with specific control mechanisms, catalysts are not needed any more, as eliminating a single copy of a symbol \(o_r\) in the decrement case of a SUB-instruction of a register machine can now be done by the non-cooperative rule \(o_r \rightarrow \lambda\), because, due to the set restriction, this rule is applied at most once.

5 Control mechanisms

To reduce the number of catalysts needed for obtaining computational completeness, specific control mechanisms can be used; some of these control mechanisms are considered in this section. For example, label selection or control languages allow for using only one catalyst (two catalysts) in (purely) catalytic P systems to obtain computational completeness; for instance, see [6, 11, 15, 16]. With target selection and maximally parallel set derivation modes, catalysts can even be avoided completely; only non-cooperative rules are needed.

For all the control mechanisms described in this section, as a special example, we will show how the 0L language \(L_1=\{ a^{2^{n}}\mid n\ge 1 \}\) can be generated using the maximally parallel derivation mode.

5.1 P systems with label selection

For all the variants of P systems of type X, we may consider labeling all the rules in the sets \(R_{1},\dots ,R_{m}\) in a one-to-one manner by labels from a set H and taking a set W containing subsets of H. In any derivation step of a P system with label selection \(\varPi\), we first select a set of labels \(U\in W\) and then, in the given derivation mode, apply a non-empty multiset of rules all of whose labels are contained in U.

Example 1

Consider the one-membrane P system

$$\begin{aligned} \varPi= & {} ( V=\{A,a\},T=\{a\},w=AA,R=\{r_1:A\rightarrow AA,r_2:A\rightarrow a\}, \\&\ W=\{ \{r_1\},\{r_2\}\} ,{\Longrightarrow }_{\varPi ,\max } ) \end{aligned}$$

with the labeled rules \(r_1:A\rightarrow AA\) and \(r_2:A\rightarrow a\); only one of these can be used according to the sets of labels in W. Using \(r_1\) in \(n-1\) derivation steps and finally using \(r_2\) yields \(a^{2^{n}}\), for any \(n\ge 1\), i.e., we get \(N_{{\text {gen,max}}}(\varPi )=L_1\), where \(L_1=\{ a^{2^{n}}\mid n\ge 1 \}\).

The families of sets \(Y_{\gamma ,\delta }\left( \varPi \right)\), \(Y\in \left\{ N,Ps\right\}\), \(\gamma \in \left\{ gen,acc\right\}\), and \(\delta \in D_{M}\cup D_{S}\) computed by P systems with label selection with at most m membranes and rules of type X are denoted by \(Y_{\gamma ,\delta }OP_{m}\left( X,ls\right)\).

Theorem 5

\(Y_{\gamma ,\delta }OP\left( cat_{1},ls\right) = Y_{\gamma ,\delta }OP\left( pcat_{2},ls\right) =YRE\) for any \(Y\in \left\{ N,Ps\right\}\), \(\gamma \in \left\{ gen,acc\right\}\), and any maximally parallel (set) derivation mode \(\delta\),

$$\begin{aligned} \delta \in \left\{ max,max_{rules},max_{objects}, smax,smax_{rules},smax_{objects}\right\} . \end{aligned}$$

The proof given in [16] for the maximally parallel mode max can be taken over for the other maximally parallel (set) derivation modes word by word; the only difference again is that in set derivation modes, in non-successful computations where more than one trap symbol \(\#\) has been generated, the trap rule \(\# \rightarrow \#\) is only applied once.

5.2 Controlled P systems and time-varying P systems

Another method to control the application of the labeled rules is to use control languages (see [19] and [2]).

In a controlled P system \(\varPi\), we additionally use a set H of labels for the rules in \(\varPi\) and a string language L over \(2^{H}\) (each subset of H represents an element of the alphabet for L) from a family FL. Every successful computation in \(\varPi\) has to follow a control word \(U_{1}\dots U_{n}\in L\): in derivation step i, only rules with labels in \(U_{i}\) are allowed to be applied (in the underlying derivation mode, for example, max or smax), and after the n-th derivation step, the computation halts; we may relax this end condition, i.e., we may stop after the i-th derivation step for any \(i\le n\), and then we speak of weakly controlled P systems. If \(L=\left( U_{1}\dots U_{p}\right) ^{*}\), \(\varPi\) is called a (weakly) time-varying P system: in computation step \(pn+i\), \(n\ge 0\), rules from the set \(U_{i}\) have to be applied; p is called the period.

Example 2

Consider the one-membrane P system

$$\begin{aligned} \varPi \, = \, & {} ( V=\{A,a\},T=\{a\},w=AA,R=\{r_1:A\rightarrow AA,r_2:A\rightarrow a\}, \\&\ L=\{r_1\}^*\{r_2\} ,{\Longrightarrow }_{\varPi ,\max } ) \end{aligned}$$

with the labeled rules \(r_1:A\rightarrow AA\) and \(r_2:A\rightarrow a\). Using the control word \({r_1}^{n-1}r_2\) means using \(r_1\) in \(n-1\) derivation steps and finally using \(r_2\), thus yielding \(a^{2^{n}}\), for any \(n\ge 1\), i.e., as in Example 1, we get \(N_{{\text {gen,max}}}(\varPi )=L_1\).

As now we do not have to distinguish between non-terminal and terminal symbols due to the use of control words, the same result can be obtained by the much simpler system

$$\begin{aligned} \varPi ^{\prime }= & {} ( V=\{a\},T=\{a\},w=aa,R=\{r_1:a\rightarrow aa\},\\&\ L=\{r_1\}^* ,{\Longrightarrow }_{\varPi ^{\prime },\max } ) \end{aligned}$$

again yielding \(N_{{\text {gen,max}}}(\varPi ^{\prime })=L_1\).

The family of sets \(Y_{\gamma ,\delta }\left( \varPi \right)\), \(Y\in \left\{ N,Ps\right\}\), computed by (weakly) controlled P systems and (weakly) time-varying P systems with period p, with at most m membranes and rules of type X as well as control languages in FL is denoted by \(Y_{\gamma ,\delta }OP_{m}\left( X,C\left( FL\right) \right)\) (\(Y_{\gamma ,\delta }OP_{m}\left( X,wC\left( FL\right) \right)\)) and \(Y_{\gamma ,\delta }OP_{m}\left( X,TV_{p}\right)\) (\(Y_{\gamma ,\delta }OP_{m}\left( X,wTV_{p}\right)\)), respectively, for \(\gamma \in \left\{ gen,acc\right\}\) and \(\delta \in D_{M}\cup D_{S}\).

Theorem 6

\(Y_{\gamma ,\delta }OP\left( cat_{1},\alpha TV_{6}\right) = Y_{\gamma ,\delta }OP\left( pcat_{2},\alpha TV_{6}\right) =YRE\), for any \(\alpha \in \left\{ \lambda ,w\right\}\), \(Y\in \left\{ N,Ps\right\}\), \(\gamma \in \left\{ gen,acc\right\}\), and

$$\begin{aligned} \delta \in \left\{ max,max_{rules},max_{objects}, smax,smax_{rules},smax_{objects}\right\} . \end{aligned}$$

The proof given in [16] for the maximally parallel mode max again can be taken over for the other maximally parallel (set) derivation modes word by word, e.g., see [6].

5.3 Target selection

In P systems with target selection, all objects on the right-hand side of a rule must have the same target, and in each derivation step, for each region a (multi)set of rules (non-empty if possible) having the same target is chosen. In [6], it was shown that for P systems with target selection (abbreviated ts) in the derivation mode smax, no catalyst is needed any more, and with \(smax_{rules}\), we even obtain a deterministic simulation (indicated by the abbreviation detacc) of deterministic register machines:

Theorem 7

For any \(Y\in \left\{ N,Ps\right\}\),

$$\begin{aligned} Y_{{\text {gen}},{\text {smax}}}OP\left( ncoo,ts\right) =YRE. \end{aligned}$$

Theorem 8

For any \(Y\in \left\{ N,Ps\right\}\),

$$\begin{aligned} Y_{{\text {detacc}},{\text {smax}}_{rules}}OP\left( ncoo,ts\right) =YRE. \end{aligned}$$

In contrast to all the other variants of P systems, P systems with target selection really take advantage of the membrane structure; no flattening is used, nor would it even be reasonable. In that sense, this variant of P systems reflects the spirit of membrane systems with a non-trivial membrane structure in the best way.

Example 3

Consider the two-membrane P system

$$\begin{aligned} \varPi= & {} ( V=\{a\},T=\{a\},\mu =[ \, [\ ]_{2}\,]_{1},w_1=aa,w_2=\lambda ,\\&\ R_1=\{ a\rightarrow aa, a\rightarrow (a,in) \} ,R_2=\emptyset ,f=2, {\Longrightarrow }_{\varPi ,\max } ) \end{aligned}$$

with the rule \(a\rightarrow aa\) having target here and the rule \(a\rightarrow (a,in)\) having target in; only one of these two rules can be used in one derivation step according to the condition of target selection. Using \(a\rightarrow aa\) in \(n-1\) derivation steps in the skin membrane and finally using \(a\rightarrow (a,in)\) yields \(a^{2^{n}}\) in the elementary membrane \([ \ ]_{2}\), for any \(n\ge 1\), i.e., we again get \(N_{{\text {gen,max}}}(\varPi )=L_1\).

6 The strangeness of minimal parallelism

There is another derivation mode known from the literature which has two possible basic definitions; unfortunately, these two variants do not yield the same results.

Following the definition given in [18], for the minimally parallel derivation mode (min), we need an additional feature for the set of rules R used in the overall P system, i.e., we consider a partitioning \(\theta\) of R into disjoint subsets \(R_1\) to \(R_h\). Usually, this partitioning of R may coincide with a specific assignment of the rules to the membranes. We observe that this partitioning \(\theta\) may, but need not be the same as the partitioning \(\eta\) used for partial halting.

There are now several possible interpretations of this minimally parallel derivation mode, which in an informal way can be described as applying multisets such that from every set \(R_j\), \(1 \le j \le h\), at least one rule (if possible) has to be used (e.g., see [8]). Yet this “if possible” allows for two different interpretations:

Minimal parallelism as a restriction of asyn:

As defined in [18], we start with a multiset \(R'\) of rules from \(Appl(\varPi , C, asyn)\) and only take it if it cannot be extended to a multiset \(R''\) of rules from \(Appl(\varPi , C, asyn)\) by adding some rule from a set \(R_j\) from which no rule is contained in \(R'\) so far.

Minimal parallelism as an extension of smax:

We start with a set \(R'\) of rules from \(Appl(\varPi , C, smax_{\theta })\), where the notion \(smax_{\theta }\) indicates that we are using smax with respect to the partitioning of R into the subsets \(R_1\) to \(R_h\), and then possibly extend it to a multiset \(R''\) of rules from \(Appl(\varPi , C, asyn)\) which contains \(R'\). This definition finally was used in [23], yet without the notion smax, because at the time the handbook was written, the notion of maximally parallel set derivation modes had not been introduced yet. Moreover, so far the notion smax had only been used with respect to the discrete partitioning, where every rule forms its own set \(R_j\), whereas for \(smax_{\theta }\) the condition is fulfilled if, from every set \(R_j\), one of its rules is used whenever possible.

Example 4

Consider the one-membrane P system working in the min-mode

$$\begin{aligned} \varPi =\left( V=\{ a,b \},T=\{ b \},w=aa, R=R_1\cup R_2, {\Longrightarrow }_{\varPi ,min }\right) \end{aligned}$$

with \(R_1=\{ a\rightarrow bb \}\) and \(R_2=\{ a\rightarrow bbb \}\) being the partitions of \(R=R_1\cup R_2\).

Starting from smax, we get only one set of rules, i.e., \(R'=\{ a\rightarrow bb , a\rightarrow bbb \}\), whose application yields the result \(b^5\).

In the case of starting with asyn, we may use one of the two rules twice, thus also getting the results \(b^4\) and \(b^6\).

Hence, when two rules are competing for the same objects, the results obtained with the two different definitions may differ, where the set of results obtained with the first definition always includes the results obtained with the second one.
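
The following brute-force check (a standalone Python sketch with our own encoding of the rules and the partitioning) reproduces the three results of Example 4 under the asyn-based definition of the minimally parallel derivation mode:

```python
# Brute-force check of Example 4 for the asyn-based definition of min
# (a standalone sketch; rule labels and partition as in the example).
from collections import Counter
from itertools import product

rules = {"r1": ("a", "bb"), "r2": ("a", "bbb")}
partition = [{"r1"}, {"r2"}]                     # the blocks R_1 and R_2
config = Counter("aa")

def applicable(multiset):                        # multiset: Counter over labels
    needed = Counter()
    for lab, k in multiset.items():
        needed[rules[lab][0]] += k
    return all(needed[s] <= config[s] for s in needed)

def is_min(multiset):
    if sum(multiset.values()) == 0 or not applicable(multiset):
        return False
    for block in partition:
        if not any(multiset[lab] for lab in block):
            # block not represented: reject if one of its rules could still be added
            if any(applicable(multiset + Counter([lab])) for lab in block):
                return False
    return True

candidates = [Counter({"r1": i, "r2": j}) for i, j in product(range(3), repeat=2)]
for m in filter(is_min, candidates):
    size = sum(len(rules[lab][1]) * k for lab, k in m.items())
    print({lab: k for lab, k in m.items() if k}, "-> b^%d" % size)
# prints the three results b^6, b^5 and b^4 obtained under the asyn-based definition
```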

The condition that the sets \(R_j\), \(1 \le j \le h\), have to be disjoint may be alleviated, for example, see [4].

6.1 The derivation mode \(min_1\)

A special variant of the minimally parallel derivation mode, with the sets \(R_j\), \(1 \le j \le h\), not being required to be disjoint, is the mode \(min_1\), which in fact means that we stay with \(smax_{\theta }\). Now let \(\theta _{k}\) denote a partitioning \(\theta\) with k sets of rules. As an interesting result, we then get the interpretation of a purely catalytic P system using max as a P system using \(min_1\) with the partitioning into sets \(R_j\), \(1 \le j \le k\), where \(R_j\) is the set of non-cooperative rules \(a\rightarrow u\) representing the corresponding catalytic rules \(c_ja\rightarrow c_ju\). Using such a partitioning \(\theta _k\) into k sets of rules corresponding to the sets of rules associated with the k catalysts, we obtain the following result:

Theorem 9

For any \(d\ge 1\) and any \(k\ge d+3\),

$$\begin{aligned}&Ps_{{\text {acc}},{\text {min}}_1}OP\left( ncoo,\theta _k \right) \\ & \quad = Ps_{{\text {gen}},{\text {min}}_1}OP\left( ncoo,\theta _3 \right) = N^{d}RE. \end{aligned}$$

6.2 Minimal parallelism with all applicable sets

There is an even stranger variant for minimal parallelism already defined in [18]:

To a configuration C, we can only apply a multiset of rules which contains at least one rule from each set \(R_j\), \(1\le j\le h\), that contains a rule applicable to C, i.e., we take all possible multisets \(R'\) from \(Appl(\varPi , C, asyn)\) which also fulfill the condition that \(R'\cap R_j\ne \emptyset\) whenever \(R_j\) contains at least one rule applicable to C, for all \(1\le j\le h\).

This derivation mode is abbreviated \(all_{aset}min\) in [18] and used under the notion amin in [4].

Example 5

Consider the one-membrane P system from Example 4, now working in the amin-mode,

$$\begin{aligned} \varPi =\left( V=\{ a,b \},T=\{ b \},w=aa, R=R_1\cup R_2, {\Longrightarrow }_{\varPi ,amin }\right) \end{aligned}$$

with \(R_1=\{ a\rightarrow bb \}\) and \(R_2=\{ a\rightarrow bbb \}\).

As both the rule from \(R_1\) and the rule from \(R_2\) are applicable, the only (multi)set of rules applicable to the configuration aa is the same as the one obtained when starting from smax, i.e., \(R'=\{ a\rightarrow bb , a\rightarrow bbb \}\), whose application yields the result \(b^5\).

Yet if we take \(w=a\) instead, then still both the rule from \(R_1\) and the rule from \(R_2\) are applicable, but there are not enough copies of the symbol a for applying both rules; hence, no derivation step is possible in this case with the derivation mode amin. On the other hand, with either of the first two variants of the minimally parallel derivation mode, we may apply either \(a\rightarrow bb\) or \(a\rightarrow bbb\), thus getting bb and bbb, respectively.

Again, we observe that the results with different definitions of the minimally parallel derivation mode may be different when two rules are competing for the same object(s).

7 Halting conditions

As already mentioned, P systems working in the maximally parallel derivation mode at first sight look like (E)0L systems. Yet the total halting condition completely destroys this seemingly obvious similarity. The connection between P systems working in the maximally parallel derivation mode and (E)0L systems can, however, be established when using unconditional halting, see [7].

Besides unconditional halting, in this section we will also discuss some results for partial halting and halting with states. In each case, as in Sect. 5, we will show how to obtain the special multiset language \(L_1=\{ a^{2^{n}}\mid n\ge 1 \}\).

7.1 Unconditional halting

Example 6

Consider the one-membrane P system

$$\begin{aligned} \varPi= & {} ( V=\{a\},T=\{a\},w=aa, R=\{ a\rightarrow aa \} , {\Longrightarrow }_{\varPi ,\max ,u} ) \end{aligned}$$

with the single rule \(a\rightarrow aa\); in every maximally parallel derivation step, the number of symbols a is doubled, so that after \(n-1\) derivation steps, \(n\ge 1\), we get \(a^{2^{n}}\); hence, we obtain \(N_{{\text {gen,max,u}}}(\varPi )=L_1\).
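
Adapting the generate sketch from Sect. 3.4 to unconditional halting only requires collecting the terminal part of every reachable configuration instead of only the halting ones (again assuming the helpers applicable_multisets, select, and apply_multiset defined there; all names are ours):

```python
# Unconditional halting: collect the terminal part of every reachable
# configuration, not only of the halting ones (assuming the helpers
# applicable_multisets, select and apply_multiset from Sects. 3.1 and 3.4).
from collections import Counter

def generate_u(rules, terminals, w, mode, bound=6):
    results, frontier = set(), [Counter(w)]
    for _ in range(bound):
        successors = []
        for config in frontier:
            results.add(frozenset((a, k) for a, k in config.items()
                                  if a in terminals))
            successors += [apply_multiset(rules, config, m)
                           for m in select(applicable_multisets(rules, config), mode)]
        frontier = list({tuple(sorted(c.items())): c for c in successors}.values())
    return results

# Example 6: the single rule a -> aa with axiom aa in the mode max; the
# collected results are a^2, a^4, a^8, ..., i.e., L_1 = { a^(2^n) | n >= 1 }.
rules = [("a", Counter({"a": 2}))]
print(sorted(dict(r).get("a", 0) for r in generate_u(rules, {"a"}, "aa", "max")))
# [2, 4, 8, 16, 32, 64] with the default step bound
```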

According to the results shown in [7], the following results hold true, if we use extended systems (indicated by the additional symbol E) and only take results from the output membrane which are terminal:

Theorem 10

For any \(Y\in \left\{ N,Ps\right\}\) and any \(m\ge 1\),

$$\begin{aligned} Y_{{\text {gen }} ,\delta ,u }EOP_{m}\left( ncoo \right) = YE0L, \end{aligned}$$

for any maximally parallel derivation mode \(\delta\),

$$\begin{aligned} \delta \in \left\{ max,max_{rules},max_{objects}\right\} . \end{aligned}$$

If we do not use extended systems, i.e., \(V=T\), we immediately obtain the following:

Corollary 1

For any \(Y\in \left\{ N,Ps\right\}\),

$$\begin{aligned} Y_{{\text {gen }},\delta ,u }OP_{1}\left( ncoo \right) = Y0L, \end{aligned}$$

for any maximally parallel derivation mode \(\delta\),

$$\begin{aligned} \delta \in \left\{ max,max_{rules},max_{objects}\right\} . \end{aligned}$$

These results now show the somewhat expected correspondence between the two parallel mechanisms of P systems and Lindenmayer systems.

We finally mention that considering acceptance with unconditional halting would not make any sense, because, according to the standard definition of accepting P systems, every input would be accepted anyway.

7.2 Partial halting

Partial halting allows us to stop a derivation as soon as some specific symbols are not present any more:

Example 7

Consider the one-membrane P system

$$\begin{aligned} \varPi= & {} ( V=\{a,s\},T=\{a\},w=as, R_1 \cup R_2, {\Longrightarrow }_{\varPi ,\max ,h} ), \end{aligned}$$

where \(R_1=\{ a\rightarrow aa\}\) and \(R_2=\{ s\rightarrow s, s\rightarrow \lambda \}\) are the two partitions of the rule set \(R=\{ a\rightarrow aa, s\rightarrow s, s\rightarrow \lambda \}\).

As long as one of the rules from \(R_2\) can be applied to the symbol s, the symbols a are doubled as usual by the rule \(a\rightarrow aa\) from \(R_1\). Using \(s\rightarrow s\) in \(n-1\) derivation steps, \(n\ge 1\), and finally applying \(s\rightarrow \lambda\), we get \(a^{2^{n}}\); hence, \(N_{{\text {gen,max,h}}}(\varPi )=L_1\).
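
Partial halting can be added to the same sketch by replacing the halting test: a configuration is (partially) halting if no applicable multiset contains a rule from every block of the partition (again assuming the helpers from Sects. 3.1 and 3.4; the partition is given as a list of sets of rule indices, and all names are ours):

```python
# Partial halting: a configuration is halting if no applicable multiset of
# rules contains a rule from every block of the partition (again assuming
# the helpers from Sects. 3.1 and 3.4).
from collections import Counter

def halts_partially(rules, config, partition):
    for m in applicable_multisets(rules, config):
        if all(any(m[i] for i in block) for block in partition):
            return False
    return True

def generate_h(rules, terminals, w, partition, mode, bound=6):
    results, frontier = set(), [Counter(w)]
    for _ in range(bound):
        successors = []
        for config in frontier:
            if halts_partially(rules, config, partition):
                results.add(frozenset((a, k) for a, k in config.items()
                                      if a in terminals))
                continue
            successors += [apply_multiset(rules, config, m)
                           for m in select(applicable_multisets(rules, config), mode)]
        frontier = list({tuple(sorted(c.items())): c for c in successors}.values())
    return results

# Example 7: rules a -> aa, s -> s, s -> lambda with the partition
# R_1 = {a -> aa}, R_2 = {s -> s, s -> lambda}, axiom as, mode max;
# the partially halting configurations are exactly those without s.
rules = [("a", Counter({"a": 2})), ("s", Counter("s")), ("s", Counter())]
print(sorted(dict(r).get("a", 0)
             for r in generate_h(rules, {"a"}, "as", [{0}, {1, 2}], "max")))
# [2, 4, 8, 16, 32] with the default step bound
```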

Some interesting results for partial halting can be found in [3, 4, 14].

7.3 Halting with states

In general, speaking of states reminds us of mechanisms like register machines; there a computation halts when the halt instruction \(l_h : HALT\) is applied. In simulations of register machines by P systems, the computation often is made halting by applying the final rule \(l_h \rightarrow \lambda\), provided no trap rules are still applicable. When \(l_h\) disappears this means that no instruction label appears any more in the configuration of the simulating P system; such a condition checking for the absence (or presence) of specific symbols in a given configuration is computable, i.e., decidable, and therefore is a condition we can use for halting with states (which ironically in this case means the absence of state symbols).

Example 8

Consider the one-membrane P system

$$\begin{aligned} \varPi= & {} ( V=\{a,s\},T=\{a\},w=as, R\\= & {} \{ a\rightarrow aa, s\rightarrow s, s\rightarrow \lambda \}, {\Longrightarrow }_{\varPi ,\max ,s} ), \end{aligned}$$

which uses the same ingredients as the system considered in Example 7, but instead of partial halting now uses the condition that a computation halts if no symbol s is present any more. This yields the same computations as for the P system in Example 7, the only difference being that the computations halt because s has been deleted. Thus, we obtain \(N_{{\text {gen,max,s}}}(\varPi )=L_1\).

8 Conclusion

In this paper, the effects of using different derivation modes on the computing power of many variants of hierarchical P systems have been illustrated. In particular, some differences between the maximally parallel derivation modes and the maximally parallel set derivation modes have been exhibited. We have also given an overview of some control mechanisms used for P systems. Moreover, we have discussed the effect of using different halting conditions such as unconditional and partial halting.

Many more relations between derivation modes and halting conditions could have been discussed, but this would have gone far beyond the scope of a regular article.