How derivation modes and halting conditions may influence the computational power of P systems

In the area of P systems, besides the standard maximally parallel derivation mode, many other derivation modes have been investigated. In this overview paper, many variants of hierarchical P systems using different derivation modes are considered, and the effects of using different derivation modes, especially the maximally parallel derivation modes and the maximally parallel set derivation modes, on the generative and the accepting power are illustrated. Moreover, an overview of some control mechanisms used for P systems is given. Furthermore, besides the standard total halting, we also consider different halting conditions such as unconditional halting and partial halting and explain how the use of different halting conditions may considerably change the computing power of P systems.


Introduction
The basic model of P systems as introduced in [21] can be considered as a distributed multiset rewriting system, where all objects, if possible, evolve in parallel in the membrane regions and may be communicated through the membranes. P systems operating on more complex objects (e.g., strings, arrays) are often considered as well; for instance, see [10].
Besides the maximally parallel derivation mode, many other derivation modes have been investigated during the last two decades. Hence, in this paper, the definitions of the standard derivation modes used for P systems are recalled. Various interpretations of derivation modes known from the P systems area are illustrated and well-known results are presented in a different manner.
Moreover, we consider not only the standard total halting, but also other halting conditions such as unconditional halting, see [7], and partial halting, see [14]. We explain and give some examples of how the use of different halting modes may considerably change the computing power of P systems.
Overviews on the field of P systems can be found in the monograph [22] and the Handbook of Membrane Computing [23]; for current news and results we refer to the P systems webpage [26] as well as to the Bulletin of the International Membrane Computing Society. The reader is assumed to be familiar with the basic definitions and notations of P systems as well as with the commonly used derivation modes and halting conditions.

A short version of this paper, especially focusing on the influence of choosing different parallel derivation modes, was presented at CMC 20, the 20th anniversary edition of the meeting of the membrane systems community, in Curtea de Argeş, Romania, from August 5 to 9, 2019.

The rest of the paper is organized as follows: In the next section, basic notions from formal language theory needed in this paper are recalled. In Sect. 3, the definition of the basic model of P systems is given and explained, including the standard derivation modes used in many papers on P systems, the basic types of rules, as well as the main halting conditions found in the literature and considered in more detail in Sect. 7. Some well-known results are summarized in a compact form in Sect. 4; special focus is put on results for catalytic P systems regarding the number of rules needed for simulating (the instructions of) register machines. In Sect. 5, important results for P systems with control mechanisms are recalled, including the variant of P systems with target selection, which is one of the very few models known from the literature of P systems which takes advantage of using a non-trivial membrane structure. A section of its own then is devoted to a special derivation mode called minimal parallelism and its variants. Examples and results for halting conditions different from the standard variant of total halting are considered in Sect. 7. A short summary concludes the paper.

Prerequisites
The set of integers is denoted by ℤ, and the set of non-negative integers by ℕ. Given an alphabet V, a finite non-empty set of abstract symbols, the free monoid generated by V under the operation of concatenation is denoted by V*. The elements of V* are called strings, the empty string is denoted by λ, and V* \ {λ} is denoted by V+. For an arbitrary alphabet V = {a_1, …, a_n}, the number of occurrences of a symbol a_i in a string x is denoted by |x|_{a_i}, while the length of a string x is denoted by |x| = Σ_{a_i ∈ V} |x|_{a_i}. The Parikh vector associated with x with respect to a_1, …, a_n is (|x|_{a_1}, …, |x|_{a_n}). The Parikh image of an arbitrary language L over {a_1, …, a_n} is the set of all Parikh vectors of strings in L, and is denoted by Ps(L). For a family of languages FL, the family of Parikh images of languages in FL is denoted by PsFL, while for families of languages over a one-letter (d-letter) alphabet, the corresponding sets of non-negative integers (d-vectors with non-negative components) are denoted by NFL (N^d FL).
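These notions are straightforward to make concrete; the following Python sketch (the function name parikh_vector is ours) computes the Parikh vector of a string with respect to a fixed ordering of the alphabet:

```python
from collections import Counter

def parikh_vector(x, alphabet):
    """Parikh vector of string x w.r.t. the fixed symbol order in `alphabet`."""
    counts = Counter(x)
    return tuple(counts[a] for a in alphabet)

# |x| is the sum of the components of the Parikh vector of x
x = "abaab"
v = parikh_vector(x, ("a", "b"))
print(v)          # (3, 2)
print(sum(v))     # 5, which equals len(x)
```

Note that the Parikh vector depends on the chosen order of the symbols, which is why the sequence a_1, …, a_n is fixed in advance.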
A (finite) multiset over a (finite) alphabet V = {a_1, …, a_n} is a mapping f : V → ℕ and can be represented by ⟨a_1^{f(a_1)}, …, a_n^{f(a_n)}⟩ or by any string x for which (|x|_{a_1}, …, |x|_{a_n}) = (f(a_1), …, f(a_n)). In the following we will not distinguish between a vector (m_1, …, m_n), a multiset ⟨a_1^{m_1}, …, a_n^{m_n}⟩, or a string x having (|x|_{a_1}, …, |x|_{a_n}) = (m_1, …, m_n). Fixing the sequence of symbols a_1, …, a_n in an alphabet V in advance, the representation of the multiset ⟨a_1^{m_1}, …, a_n^{m_n}⟩ by the string a_1^{m_1} … a_n^{m_n} is unique.

The families of regular, context-free, and recursively enumerable string languages are denoted by REG, CF, and RE, respectively. For example, PsREG = PsCF, which is the reason why in the area of multiset rewriting CF plays no role at all, and why in the area of membrane computing we usually get characterizations of PsREG and PsRE.

An extended Lindenmayer system (an E0L system for short) is a construct G = (V, T, P, w), where V is an alphabet, T ⊆ V is the terminal alphabet, w ∈ V* is the axiom, and P is a finite set of non-cooperative rules over V of the form a → u. In a derivation step, each symbol present in the current sentential form is rewritten using one rule arbitrarily chosen from P. The language generated by G, denoted by L(G), consists of all the strings over T which can be generated in this way by starting from the initial string w. An E0L system with T = V is called a 0L system.
For more details of formal language theory, the reader is referred to the monographs and handbooks in this area as [9] and [24].

Register machines
A register machine is a tuple M = (m, B, l_0, l_h, P), where m is the number of registers, B is a set of labels, l_0 ∈ B is the initial label, l_h ∈ B is the final label, and P is the set of instructions labeled by elements of B. The instructions of M can be of the following forms:

• l_1 : (ADD(j), l_2, l_3). Increases the value of register j by one, followed by a non-deterministic jump to instruction l_2 or l_3. This instruction is usually called increment.
• l_1 : (SUB(j), l_2, l_3). If the value of register j is zero, then jump to instruction l_3; otherwise, the value of register j is decreased by one, followed by a jump to instruction l_2. The two cases of this instruction are usually called zero-test and decrement, respectively.
• l_h : HALT. Stops the execution of the register machine.
A configuration of a register machine is described by the contents of each register and by the value of the current label, which indicates the next instruction to be executed. Computations start by executing the instruction l 0 of P, and terminate with reaching the HALT-instruction l h .
M is called deterministic if in all ADD-instructions p : (ADD(r), q, s) it holds that q = s; in this case we write p : (ADD(r), q). Register machines provide a computationally complete model for computations with natural numbers: in the generating case, we start with empty registers, use the last two registers for the necessary computations, and take as results the vectors of natural numbers (x_1, …, x_d) obtained as the contents of the first d registers in all possible halting computations. Without loss of generality, we may assume that at the beginning of a computation all registers are empty and that during any computation of M only the registers d + 1 and d + 2 can be decremented.
In the accepting case, we start with the natural numbers x_1, …, x_d in the first d registers (and with 0 in the registers d + 1 and d + 2) and use the two additional registers d + 1 and d + 2 for the necessary computations; in this case, all registers may be decremented; moreover, the register machine can be assumed to be deterministic, i.e., we only have ADD-instructions of the form l_1 : (ADD(j), l_2), with l_1 ∈ B \ {l_h}, l_2 ∈ B, 1 ≤ j ≤ m. The vector (x_1, …, x_d) is accepted if and only if M halts with the natural numbers x_1, …, x_d having been given as input in the first d registers.
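As an illustration, the register machine model can be simulated in a few lines; the program encoding and the function name below are our own choices, not fixed notation from the literature:

```python
import random

def run_register_machine(program, l0, lh, m, inputs=()):
    """Simulate a (possibly non-deterministic) register machine.

    `program` maps a label to ('ADD', j, l2, l3) or ('SUB', j, l2, l3);
    registers are numbered 1..m; `inputs` fills registers 1..d.
    """
    regs = [0] * (m + 1)                 # regs[0] is unused
    for i, v in enumerate(inputs, start=1):
        regs[i] = v
    label = l0
    while label != lh:
        op, j, l2, l3 = program[label]
        if op == 'ADD':                  # increment, non-deterministic jump
            regs[j] += 1
            label = random.choice((l2, l3))
        else:                            # SUB: decrement or zero-test
            if regs[j] > 0:
                regs[j] -= 1
                label = l2
            else:
                label = l3
    return regs[1:]

# A deterministic machine adding register 2 to register 1:
prog = {
    'p': ('SUB', 2, 'q', 'h'),           # while r2 > 0: r2 -= 1 ...
    'q': ('ADD', 1, 'p', 'p'),           # ... r1 += 1
}
print(run_register_machine(prog, 'p', 'h', m=2, inputs=(3, 4)))  # [7, 0]
```

The deterministic example has q = s in its ADD-instruction, as required for accepting register machines.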
For these and other useful results on the computational power of register machines, we refer to [20].

A general model for hierarchical P systems
We now recall the main definitions of the general model for hierarchical P systems and the basic derivation modes as defined, for example, in [18]. Moreover, we define the halting conditions discussed in this paper.
A (hierarchical) P system (with rules of type X) working in the derivation mode δ is a construct

Π = (V, T, μ, w_1, …, w_m, R_1, …, R_m, f, ⟹_{Π,δ}), where

• V is the alphabet of objects;
• T ⊆ V is the alphabet of terminal objects;
• μ is the hierarchical membrane structure (a rooted tree of membranes) with the membranes uniquely labeled by the numbers from 1 to m;
• w_i, 1 ≤ i ≤ m, is the initial multiset of objects in membrane i;
• R_i, 1 ≤ i ≤ m, is a finite set of rules of type X assigned to membrane i;
• f is the label of the membrane from which the result of a computation has to be taken (in the generative case) or into which the initial multiset has to be given in addition to w_f (in the accepting case);
• ⟹_{Π,δ} is the derivation relation under the derivation mode δ.
The symbol X in "rules of type X" may stand for "evolution", "communication", "membrane evolution", etc. In this paper, we will mainly consider non-cooperative as well as catalytic and purely catalytic rules, see Sect. 3.2. A configuration is a list of the contents of each membrane region; a sequence of configurations C_1, …, C_k is called a computation in the derivation mode δ if C_i ⟹_{Π,δ} C_{i+1} for 1 ≤ i < k. The derivation relation ⟹_{Π,δ} is defined by the sets of rules in Π and the given derivation mode δ, which determines the multiset of rules to be applied to the multisets contained in each membrane region.
The language generated by Π is the set of all terminal multisets which can be obtained in the output membrane f starting from the initial configuration C_1 = (w_1, …, w_m) using the derivation mode δ in a halting computation, i.e.,

L_{gen,δ}(Π) = { (C(f))_T | C_1 ⟹*_{Π,δ} C, C halting },

where (C(f))_T stands for the terminal part of the multiset contained in the output membrane f of the configuration C, and the configuration C is halting, i.e., no further configuration C′ can be derived from it.
The family of languages of multisets generated by P systems of type X with at most n membranes in the derivation mode δ is denoted by Ps_{gen,δ}OP_n(X).
We may also consider P systems as accepting mechanisms: in membrane f, we add the input multiset w_0 to w_f in the initial configuration. Then the family of languages of multisets accepted by P systems of type X with at most n membranes in the derivation mode δ is denoted by Ps_{acc,δ}OP_n(X).
We finally mention that P systems can also be used to compute functions and relations, using f both as the input and the output membrane, or even using two different membranes for input and output. Yet, in this paper, we will mainly focus on the generating case.

Derivation modes
The set of all multisets of rules applicable in a P system Π to a given configuration C is denoted by Appl(Π, C) and can be restricted by imposing specific conditions, thus yielding the following basic derivation modes (for example, see [18] for formal definitions):

• asynchronous mode (abbreviated asyn): at least one rule is applied;
• sequential mode (sequ): only one rule is applied;
• maximally parallel mode (max): a non-extendable multiset of rules is applied;
• maximally parallel mode with maximal number of rules (max_rules): a non-extendable multiset of rules of maximal possible cardinality is applied;
• maximally parallel mode with maximal number of objects (max_objects): a non-extendable multiset of rules affecting as many objects as possible is applied.
In [6], the set variants of these derivation modes are considered, i.e., each rule can be applied at most once. Thus, starting from the set of all sets of applicable rules, we obtain the set modes sasyn, smax, smax_rules, and smax_objects (the sequential mode is already a set mode by definition):

• asynchronous set mode (abbreviated sasyn): at least one rule is applied, but each rule at most once;
• maximally parallel set mode (smax): a non-extendable set of rules is applied;
• maximally parallel set mode with maximal number of rules (smax_rules): a non-extendable set of rules of maximal possible cardinality is applied;
• maximally parallel set mode with maximal number of objects (smax_objects): a non-extendable set of rules affecting as many objects as possible is applied.
Let us denote the set of all multisets (possibly only sets) of rules applicable in a P system Π to a given configuration C in the derivation mode δ by Appl(Π, C, δ). We immediately observe that Appl(Π, C, asyn) = Appl(Π, C).
To collect the set and multiset derivation modes, we use the following notations: D_S = {sequ, sasyn, smax, smax_rules, smax_objects} and D_M = {asyn, max, max_rules, max_objects}.
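The difference between the multiset modes and their set variants can be made concrete with a small brute-force enumeration of Appl(Π, C, δ); all names are ours, and the code is meant only for tiny configurations:

```python
from collections import Counter
from itertools import combinations_with_replacement

def appl(rules, config, mode):
    """Brute-force Appl(Pi, C, mode) for multiset rewriting rules given as
    (lhs, rhs) strings; `config` is the current multiset as a string."""
    C = Counter(config)
    bound = sum(C.values())              # a larger multiset of rules cannot fit
    cand = []
    for k in range(1, bound + 1):
        for combo in combinations_with_replacement(range(len(rules)), k):
            need = Counter()
            for i in combo:
                need.update(rules[i][0])
            if all(need[s] <= C[s] for s in need):
                cand.append(Counter(combo))
    if mode in ("sasyn", "smax"):        # set modes: each rule at most once
        cand = [m for m in cand if all(v == 1 for v in m.values())]
    if mode in ("max", "smax"):          # keep only the non-extendable ones
        cand = [m for m in cand
                if not any(n != m and all(n[i] >= c for i, c in m.items())
                           for n in cand)]
    return cand

rules = [("a", "bb"), ("a", "bbb")]      # rule 0: a -> bb, rule 1: a -> bbb
print(len(appl(rules, "aa", "max")))     # 3 non-extendable multisets
print(len(appl(rules, "aa", "smax")))    # 1 non-extendable set
```

For the configuration aa with these two rules, max admits three non-extendable multisets of rules, whereas smax admits only the single set containing both rules.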

Standard rule variants
Non-cooperative rules have the form a → w, where a is a symbol and w is a multiset; catalytic rules have the form ca → cw, where the symbol c is called the catalyst; and cooperative rules have no restrictions on the form of the left-hand side. These types of rules will be denoted by ncoo (non-cooperative), pcat (purely catalytic), and coo (cooperative); if both non-cooperative and catalytic rules are allowed, we write cat (catalytic). If the P system has more than one membrane, each symbol on the right-hand side may have a target assigned to it, indicating where the symbol has to be sent after the application of the rule; the targets take into account the tree structure of the membranes:

• here: the symbol stays in the membrane where the rule is applied;
• out: the symbol is sent to the outer membrane, i.e., the membrane enclosing the membrane where the rule is applied;
• in: the symbol is sent to an inner membrane, i.e., a membrane enclosed by the membrane where the rule is applied;
• in_j: the symbol is sent to the inner membrane labeled by j.

Halting conditions
Besides the standard total halting, with no (multi)set of rules being applicable any more to the current configuration, some more variants of halting conditions have been considered in the literature:

• total halting (H): the common halting strategy, where the computation stops when no (multi)set of rules is applicable any more;
• unconditional halting (u): the result of a computation can be taken from every configuration derived from the initial one (possibly only taking terminal results);
• partial halting (h): the set of rules R is partitioned into disjoint subsets R_1 to R_h, and a computation stops if there is no multiset of rules applicable to the current configuration which contains a rule from every set R_j, 1 ≤ j ≤ h;
• halting with states (s): the configuration with which a derivation may stop must fulfill a recursive condition (which corresponds to a final state).

The variant of unconditional halting was introduced in [7]. Partial halting, for example, was investigated in [3,4,14], using the membranes for partitioning the rules. Formal definitions for the halting conditions H, h, and s can be found in [18].
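The distinction between total and partial halting can be sketched as a simple check on the applicable multisets of rules; the data encoding below is our own:

```python
def halts(applicable_multisets, partition, mode):
    """Check total halting (H) and partial halting (h) for a configuration.

    applicable_multisets: list of dicts mapping rule index -> multiplicity;
    partition: the sets R_1..R_h given as lists of rule indices."""
    if mode == "H":       # total halting: nothing is applicable at all
        return not applicable_multisets
    if mode == "h":       # partial halting: no applicable multiset hits every R_j
        return not any(
            all(any(m.get(i, 0) > 0 for i in r_j) for r_j in partition)
            for m in applicable_multisets)
    raise ValueError(mode)

# Rule 0 is still applicable, rule 1 is not: with the partition
# {R_1, R_2} = {{0}, {1}}, the computation halts partially, but not totally.
appl = [{0: 1}, {0: 2}]
print(halts(appl, [[0], [1]], "H"))   # False
print(halts(appl, [[0], [1]], "h"))   # True
```

As in the definition above, partial halting stops a computation as soon as no applicable multiset contains a rule from every partition class, even though some rules may still be applicable.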
For β ∈ {H, h, u, s}, we add the halting condition β to the description of the generated or accepted language, i.e., we then write L_{γ,δ,β}(Π), γ ∈ {gen, acc}. The same extension is made for the corresponding families of languages of multisets, i.e., for n ≥ 1, we write Y_{γ,δ,β}OP_n(X). By default, β is understood to be the total halting H and is then usually omitted in all these notations.

Flattening
As many variants of P systems can be flattened to only one membrane, see [13], we often may assume the simplest membrane structure of only one membrane, which in effect reduces the P system to a multiset processing mechanism. Observing that f = 1, in what follows we then will use the reduced notation

Π = (V, T, w, R, ⟹_{Π,δ}).

In case we use catalysts, we write

Π = (V, C, T, w, R, ⟹_{Π,δ}),

with C ⊆ (V \ T) denoting the set of catalysts.
For a one-membrane system, the definitions of the language generated by Π and the language accepted by Π using the derivation mode δ and the halting condition β can be written in an easier way; for example, with v_T denoting the terminal part of the multiset v, we have

L_{gen,δ,β}(Π) = { v_T | w ⟹*_{Π,δ} v, v halting with respect to β }.

The family of languages of multisets generated by one-membrane P systems of type X in the derivation mode δ and with the halting condition β is denoted by Ps_{gen,δ,β}OP(X).
The family of languages of multisets accepted by one-membrane P systems of type X in the derivation mode δ and with the halting condition β is denoted by Ps_{acc,δ,β}OP(X).
In the following, we will mainly focus on the generative case, and when writing Ps_δ OP(X) we by default will mean Ps_{gen,δ}OP(X).

Some well-known results
In this section, we recall some well-known results, which usually are not stated in the compact form given here.

Non-cooperative rules
Using only non-cooperative rules leaves us on the level of semi-linear sets: as for derivations with context-free rules (and non-cooperative rules correspond to those), the resulting derivation tree does not depend on an interpretation as a sequential or a parallel derivation of any kind. Moreover, context-free (string or multiset) languages are closed under projections; hence, taking (even only terminal) results out from a specific output membrane does not make any difference. Therefore, we may state the following result:

Theorem 1 For any Y ∈ {N, Ps}, any derivation mode δ, and any n ≥ 1, Y_{gen,δ}OP_n(ncoo) = YREG.
Although P systems working in the maximally parallel derivation mode are a parallel mechanism, we cannot go beyond PsREG, see Theorem 1.
For example, the rule a → aa used in parallel very much reminds us of a 0L system, i.e., a Lindenmayer system of the simplest form, which, when starting from the axiom aa, yields the language L_1 = {a^{2^n} | n ≥ 1}. In order to also get this language with P systems working in one of the maximally parallel derivation modes, we either need some control mechanism (see Sect. 5) or some other special halting condition (see Sect. 7).
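A minimal sketch of such a 0L derivation in Python (deterministic here, as each symbol has exactly one rule):

```python
def ol_step(word, rules):
    """One derivation step of a 0L system: every symbol is rewritten in
    parallel (deterministically, since each symbol has exactly one rule)."""
    return "".join(rules[a] for a in word)

rules = {"a": "aa"}       # the single rule a -> aa
w = "aa"                  # axiom
lengths = []
for _ in range(4):
    lengths.append(len(w))
    w = ol_step(w, rules)
print(lengths)            # [2, 4, 8, 16], i.e., 2^n for n = 1, 2, 3, 4
```

The system never halts in the sense of total halting, which is exactly why a P system with this single rule and the standard halting condition cannot generate L_1 without additional control.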

The importance of using catalysts
If in a one-membrane system we only have one catalyst c and only catalytic rules assigned to c, then this corresponds to a sequential use of non-cooperative rules, which together with Theorem 1 yields the following result:

Theorem 2 For any Y ∈ {N, Ps} and any derivation mode δ, Y_{gen,δ}OP(pcat_1) = YREG.
Even without additional control mechanisms, only two (three) catalysts are sufficient to obtain computational completeness for (purely) catalytic P systems using the derivation mode max, see [12]. In a more general way, the following results were already proved there:

Theorem 3 For any d ≥ 1 and any Y ∈ {N^d, Ps}, Y_{gen,max}OP(cat_2) = Y_{gen,max}OP(pcat_3) = YRE.
The complexity of the construction, for all these derivation modes, has been considerably reduced since the original paper from 2005, for example, see [1,5,25], and [6].
Although not yet stated in [12], we mention that these results are also valid when replacing the derivation mode max by any other maximally parallel (set) derivation mode, i.e., for any δ in {max, max_rules, max_objects, smax, smax_rules, smax_objects}.

The following theorem states the best results known so far with respect to the number of catalysts and the number of rules for catalytic P systems; the proof follows the one given in [6] for the maximally parallel set derivation modes.

Theorem 4 For any d ≥ 1 and any k ≥ d + 2, Ps_{acc,max}OP(pcat_{k+1}) = Ps_{acc,max}OP(cat_k) = N^d RE.

Proof For all d registers, n_i copies of the symbol o_i are used to represent the value n_i in register i, 1 ≤ i ≤ d. For each of the m decrementable registers, we take a catalyst c_i and two specific symbols d_i, e_i, 1 ≤ i ≤ m, for simulating SUB-instructions on these registers. For every l ∈ B, we use a state symbol p_l, and for l ∈ B_SUB also its variants p̂_l, p̃_l, p̄_l, where B_SUB denotes the set of labels of SUB-instructions; w_0 stands for the additional input present at the beginning, for example, for the given input in case of accepting systems. Among the rules of the system, for every SUB-instruction j on register r, we find

{p_j → p̂_j e_r D_{m,r}, p_j → p̃_j D_{m,r}} ∪ {c_r o_r → c_r d_r, c_r d_r → c_r, c_{r ⊕_m 1} e_r → c_{r ⊕_m 1}}.

We define r ⊕_m 1 := r + 1 for r < m and m ⊕_m 1 := 1. Usually, every catalyst c_i, i ∈ {1, …, m}, is kept busy with the symbol d_i using the rule c_i d_i → c_i, as otherwise the symbols d_i would have to be trapped by the rule d_i → #, and the trap rule # → # then enforces an infinite non-halting computation. Only during the simulation of a SUB-instruction on register r, the corresponding catalyst c_r is left free for decrementing or for zero-checking in the second step of the simulation, and in the decrement case both c_r and its "coupled" catalyst c_{r ⊕_m 1} need to be free for specific actions in the third step of the simulation.
For the simulation of instructions, we use the shortcuts D_m = d_1 ⋯ d_m and D_{m,r} = d_1 ⋯ d_{r−1} d_{r+1} ⋯ d_m, i.e., D_{m,r} introduces one copy of d_i for every decrementable register i except r. The HALT-instruction labeled l_h is simply simulated by not introducing the corresponding state symbol p_{l_h}, i.e., by replacing it by λ in all rules defined in R_1.
Each ADD-instruction j : (ADD(r), k, l), for r ∈ {1, …, d}, can easily be simulated by the rules p_j → o_r p_k D_m and p_j → o_r p_l D_m; in parallel, the rules c_i d_i → c_i, 1 ≤ i ≤ m, have to be carried out, as otherwise the symbols d_i would have to be trapped by the rules d_i → #.
Each SUB-instruction j : (SUB(r), k, l) is simulated as shown in the table below (the rules in brackets [ and ] are those to be carried out in case of a wrong choice):

Register r is not empty          | Register r is empty
p_j → p̂_j e_r D_{m,r}            | p_j → p̃_j D_{m,r}
c_r o_r → c_r d_r [c_r e_r → c_r #] | c_r should stay idle

In the first step of the simulation of each instruction (ADD-instruction, SUB-instruction, and even HALT-instruction), due to the introduction of D_m in the previous step (we also start with that in the initial configuration), every catalyst c_r is kept busy by the corresponding symbol d_r, 1 ≤ r ≤ m. Hence, this also guarantees that the zero-check on register r works correctly, enforcing d_r → # to be applied in the case of a wrong choice, as then two symbols d_r are present.
◻

Exactly the same construction as elaborated above can be used when allowing for m + 2 catalysts, with catalyst c_{m+1} being used with the state symbols and catalyst c_{m+2} being used with the trap rules. Yet for the purely catalytic case, only one additional catalyst c_{m+1} is needed, to be used with all the non-cooperative rules; in this case, however, a slightly more complicated simulation of SUB-instructions is needed, see [25], where the corresponding results both for catalytic and for purely catalytic P systems are shown.
The simulation results established above hold true for register machines and their corresponding (purely) catalytic P systems, for generating and accepting systems, and even for systems computing functions or relations on natural numbers.
Many computational completeness results for variants of P systems are obtained by simulating register machines, which in fact means that a sequential machine has to be simulated by a parallel mechanism. Exactly this feature of breaking down the parallelism to sequentiality is the main importance of using catalysts: when using a maximally parallel (set) derivation mode, for decrementing the number of copies of a symbol o_r to carry out the decrement case of a SUB-instruction of the register machine, we cannot use the non-cooperative rule o_r → λ; instead, we have to use the catalytic rule c o_r → c.
What happens in the case of two catalysts in purely catalytic P systems (and one catalyst in the case of catalytic P systems) has been one of the most intriguing open problems in the area of P systems for a long time, e.g., see [17], where it is shown that catalytic P systems with one catalyst can simulate partially blind register machines and partially blind counter automata.
With respect to the importance of using catalytic rules, the set derivation modes offer new opportunities, i.e., using specific control mechanisms, catalysts are not needed any more: eliminating only one copy of a symbol o_r to carry out the decrement case of a SUB-instruction of a register machine can now be done by the non-cooperative rule o_r → λ, because due to the set restriction, this rule is not applied more than once.
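The effect of the set restriction on an erasing rule can be sketched as follows (encoding ours, and assuming no other rule competes for o_r): under max every copy is erased, under smax exactly one.

```python
from collections import Counter

def apply_erasing_rule(config, symbol, mode):
    """Apply the non-cooperative erasing rule `symbol -> λ` to the multiset
    `config` (given as a string) in mode max or smax, assuming no other
    rule competes for `symbol`."""
    c = Counter(config)
    if c[symbol] == 0:
        return +c                        # rule not applicable, nothing changes
    # max: the rule is applied to every copy; smax: to exactly one copy
    c[symbol] = 0 if mode == "max" else c[symbol] - 1
    return +c                            # unary + drops zero counts

print(sorted(apply_erasing_rule("ooop", "o", "max").elements()))   # ['p']
print(sorted(apply_erasing_rule("ooop", "o", "smax").elements()))  # ['o', 'o', 'p']
```

This is exactly why o_r → λ can replace the catalytic rule c o_r → c in the set derivation modes: the decrement removes one object, not all of them.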

Control mechanisms
To reduce the number of catalysts needed for obtaining computational completeness, specific control mechanisms can be used; some of them are considered in this section. For example, label selection or control languages allow for using only one catalyst (two catalysts) in (purely) catalytic P systems for obtaining computational completeness, for instance, see [6,11,15,16]. With target selection and maximally parallel set derivation modes, catalysts can even be avoided completely; only non-cooperative rules are needed. For each of the control mechanisms described in this section, as a special example, we will show how the 0L language L_1 = {a^{2^n} | n ≥ 1} can be generated using the maximally parallel derivation mode.

P systems with label selection
For all the variants of P systems of type X, we may consider labeling all the rules in the sets R_1, …, R_m in a one-to-one manner by labels from a set H and taking a set W containing subsets of H. In any derivation step of a P system with label selection Π, we first select a set of labels U ∈ W and then, in the given derivation mode, we apply a non-empty multiset R of rules such that all the labels of the rules in R are in U.
Example 1 Consider the one-membrane P system Π with the axiom AA, the labeled rules r_1 : A → AA and r_2 : A → a, and W = {{r_1}, {r_2}}; in every derivation step, only one of these two rules can be used according to the sets of labels in W. Using r_1 in n − 1 derivation steps and finally using r_2 yields a^{2^n}, for any n ≥ 1, i.e., we get N_{gen,max}(Π) = L_1.

The families of sets Y_{γ,δ}(Π), Y ∈ {N, Ps}, γ ∈ {gen, acc}, and δ ∈ D_M ∪ D_S, computed by P systems with label selection with at most m membranes and rules of type X are denoted by Y_{γ,δ}OP_m(X, ls).
Theorem 5 Y_{γ,δ}OP(cat_1, ls) = Y_{γ,δ}OP(pcat_2, ls) = YRE for any Y ∈ {N, Ps}, γ ∈ {gen, acc}, and any maximally parallel (set) derivation mode δ.

The proof given in [16] for the maximally parallel mode max can be taken over for the other maximally parallel (set) derivation modes word by word; the only difference again is that in set derivation modes, in non-successful computations where more than one trap symbol # has been generated, the trap rule # → # is only applied once.

Controlled P systems and time-varying P systems
Another method to control the application of the labeled rules is to use control languages (see [19] and [2]). In a controlled P system Π, in addition we use a set H of labels for the rules in Π, and a string language L over 2^H (each subset of H represents an element of the alphabet for L) from a family FL. Every successful computation in Π has to follow a control word U_1 … U_n ∈ L: in derivation step i, only rules with labels in U_i are allowed to be applied (in the underlying derivation mode, for example, max or smax), and after the n-th derivation step, the computation halts; we may relax this end condition, i.e., we may stop after the i-th derivation step for any i ≤ n, and then we speak of weakly controlled P systems. If L = (U_1 … U_p)*, Π is called a (weakly) time-varying P system: in the computation step pn + i, n ≥ 0, rules from the set U_i have to be applied; p is called the period.
Example 2 Consider the one-membrane P system Π with the labeled rules r_1 : A → AA and r_2 : A → a. Using the control word {r_1}^{n−1}{r_2} means using r_1 in n − 1 derivation steps and finally using r_2, thus yielding a^{2^n}, for any n ≥ 1, i.e., as in Example 1, we get N_{gen,max}(Π) = L_1.

As now we do not have to distinguish between non-terminal and terminal symbols due to the use of control words, the same result can be obtained by the much simpler system Π′ using only the rule r_1 : a → aa, again yielding N_{gen,max}(Π′) = L_1.
The families of sets Y_{γ,δ}(Π), Y ∈ {N, Ps}, computed by (weakly) controlled P systems with at most m membranes and rules of type X as well as control languages in FL are denoted by Y_{γ,δ}OP_m(X, C(FL)) (Y_{γ,δ}OP_m(X, wC(FL))), and for (weakly) time-varying P systems with period p by Y_{γ,δ}OP_m(X, TV_p) (Y_{γ,δ}OP_m(X, wTV_p)), respectively, for γ ∈ {gen, acc} and δ ∈ D_M ∪ D_S. The proof given in [16] for the maximally parallel mode max again can be taken over word by word for the other maximally parallel (set) derivation modes δ ∈ {max, max_rules, max_objects, smax, smax_rules, smax_objects}, e.g., see [6].

Target selection
In P systems with target selection, all objects on the right-hand side of a rule must have the same target, and in each derivation step, for each region a (multi)set of rules, non-empty if possible, having the same target is chosen. In [6], it was shown that for P systems with target selection (abbreviated ts) in the derivation mode smax no catalyst is needed any more, and with smax_rules we even obtain a deterministic simulation (indicated by the abbreviation detacc) of deterministic register machines. In contrast to all the other variants of P systems, P systems with target selection really take advantage of the membrane structure; no flattening is used or even reasonable. In that sense, this variant of P systems reflects the spirit of membrane systems with a non-trivial membrane structure in the best way.

Example 3
Consider the two-membrane P system Π with the axiom aa in the skin membrane, the rule a → aa having target here, and the rule a → (a, in) having target in; only one of these two rules can be used in one derivation step according to the condition of target selection. Using a → aa in n − 1 derivation steps in the skin membrane and finally using a → (a, in) yields a^{2^n} in the elementary membrane [ ]_2, for any n ≥ 1, i.e., we again get N_{gen,max}(Π) = L_1.
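Example 3 can be sketched as a two-membrane toy simulation (all names ours):

```python
from collections import Counter

def ts_step(skin, inner, choice):
    """One max-parallel step with target selection in a two-membrane system:
    all rules applied in one region in one step must share the same target."""
    if choice == "here":                  # a -> aa: the result stays in the skin
        return Counter({"a": 2 * skin["a"]}), inner
    else:                                 # a -> (a, in): all copies move inward
        return Counter(), inner + Counter({"a": skin["a"]})

skin, inner = Counter("aa"), Counter()    # axiom aa in the skin membrane
for choice in ["here", "here", "in"]:     # n - 1 = 2 doubling steps, then send in
    skin, inner = ts_step(skin, inner, choice)
print(inner["a"])   # 8, i.e., 2^3 objects a in the elementary membrane
```

After the last step the skin membrane is empty, so no rule is applicable any more and the system halts with a^{2^n} in the output membrane.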

The strangeness of minimal parallelism
There is another derivation mode known from literature, which has two possible basic definitions, but these two variants unfortunately do not yield the same results.
Following the definition given in [18], for the minimally parallel derivation mode (min), we need an additional feature for the set of rules R used in the overall P system, i.e., we consider a partitioning of R into disjoint subsets R_1 to R_h. Usually, this partitioning coincides with the assignment of the rules to the membranes. We observe that this partitioning may, but need not, be the same as the partitioning used for partial halting.
There are now several possible interpretations of this minimally parallel derivation mode, which in an informal way can be described as applying multisets of rules such that from every set R_j, 1 ≤ j ≤ h, at least one rule, if possible, has to be used (e.g., see [8]). Yet this "if possible" allows for two different interpretations:

Minimal parallelism as a restriction of asyn. As defined in [18], we start with a multiset R′ of rules from Appl(Π, C, asyn) and only take it if it cannot be extended to a multiset R′′ of rules from Appl(Π, C, asyn) by some rule from a set R_j from which so far no rule is in R′.

Minimal parallelism as an extension of smax_R. We start with a set R′ of rules from Appl(Π, C, smax_R), where the notation smax_R indicates that we are using smax with respect to the partitioning of R into the subsets R_1 to R_h, and then possibly extend it to a multiset R′′ of rules from Appl(Π, C, asyn) which contains R′. This definition finally was used in [23], yet without using the notion smax_R, because at the moment when this handbook was written, the notion of maximally parallel set derivation modes had not been invented yet. Moreover, the use of the notion smax so far was restricted to the discrete topology, where every rule forms its own set R_j; for smax_R, in contrast, the condition is fulfilled if from each set R_j one of its rules is used if possible.

Example 4
Consider the one-membrane P system working in the min-mode with R_1 = {a → bb} and R_2 = {a → bbb} being the partitioning of the rule set R = {a → bb, a → bbb}, and with initial multiset aa. Starting from smax, we get only one set of rules, i.e., R′ = {a → bb, a → bbb}, whose application yields the result b^5.
In the case of starting with asyn, we may also use one of the two rules twice, thus also getting the results b^4 and b^6.
Hence, when two rules are competing for the same objects, the results obtained with the two different definitions may differ; the set of results obtained when using the first definition always includes the results obtained by the second definition.
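The difference between the two interpretations can be made concrete with a small Python sketch (a toy simulator with made-up helper names, not code from the paper): a configuration is a multiset of symbols, a rule is a pair of multisets, and a partitioning is a list of sets of rule indices.

```python
from collections import Counter

def leq(small, big):                       # multiset inclusion
    return all(big[s] >= n for s, n in small.items())

def sub(big, small):                       # multiset difference
    r = Counter(big); r.subtract(small); return +r

def asyn(config, rules):
    """Appl(Pi, C, asyn): all non-empty multisets of rule indices
    jointly applicable to config."""
    out = set()
    def rec(i, chosen, rem):
        if i == len(rules):
            if chosen:
                out.add(tuple(sorted(chosen)))
            return
        k_chosen, k_rem = chosen, rem
        while True:
            rec(i + 1, k_chosen, k_rem)
            if not leq(rules[i][0], k_rem):
                break
            k_chosen, k_rem = k_chosen + [i], sub(k_rem, rules[i][0])
    rec(0, [], config)
    return out

def rest(config, rules, ms):               # resources left after covering all lhs's
    for i in ms:
        config = sub(config, rules[i][0])
    return config

def min_asyn(config, rules, partition):
    """min as a restriction of asyn: keep only the multisets that cannot be
    extended by a rule from a partition set not yet represented."""
    return {ms for ms in asyn(config, rules)
            if all(any(i in ms for i in part) or
                   not any(leq(rules[i][0], rest(config, rules, ms)) for i in part)
                   for part in partition)}

def min_smax(config, rules, partition):
    """min as an extension of smax (one reading of smax w.r.t. the partitioning):
    take a *set* of rules not extendable by a rule from an unrepresented
    partition set, then extend it to any applicable multiset."""
    smax_sets = {s for s in asyn(config, rules)
                 if len(set(s)) == len(s)
                 and all(any(i in s for i in part) or
                         not any(leq(rules[i][0], rest(config, rules, s))
                                 for i in part)
                         for part in partition)}
    return {ms for ms in asyn(config, rules)
            if any(set(s) <= set(ms) for s in smax_sets)}

def result(config, rules, ms):             # apply ms, return resulting word
    res = rest(config, rules, ms)
    for i in ms:
        res.update(rules[i][1])
    return "".join(sorted(res.elements()))

# Example 4: R_1 = {a -> bb}, R_2 = {a -> bbb}, initial multiset aa
rules = [(Counter("a"), Counter("bb")), (Counter("a"), Counter("bbb"))]
partition, aa = [{0}, {1}], Counter("aa")
r1 = {result(aa, rules, ms) for ms in min_asyn(aa, rules, partition)}
r2 = {result(aa, rules, ms) for ms in min_smax(aa, rules, partition)}
print(sorted(r1), sorted(r2))  # ['bbbb', 'bbbbb', 'bbbbbb'] ['bbbbb']
```

With the first definition each competing rule may also be used exclusively, yielding b^4 and b^6 in addition to b^5, while starting from smax forces both partition sets to contribute a rule, leaving b^5 as the only result.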
The condition that the sets R_j, 1 ≤ j ≤ h, have to be disjoint may be relaxed; for example, see [4].

The derivation mode min_1
A special variant of the minimally parallel derivation mode, with the sets R_j, 1 ≤ j ≤ h, not being required to be disjoint, is the mode min_1, which in fact means that we stay with smax. Now consider a partitioning of R into k sets of rules. As an interesting result, we then get the interpretation of a purely catalytic P system using max as a P system using min_1 with the partitioning R_j, 1 ≤ j ≤ k, where R_j is the set of non-cooperative rules a → u representing the corresponding catalytic rules c_j a → c_j u. Using such a partitioning into k sets of rules corresponding to the sets of rules associated with the k catalysts, we obtain the following result:

Theorem 9 For any d ≥ 1 and any k ≥ d + 3,
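The interpretation of a purely catalytic P system as a P system working in min_1 described above can be sketched as a small transformation (a hypothetical representation, not code from the paper): each catalytic rule c_j a → c_j u is stored as a triple, and grouping the rules by catalyst while stripping the catalyst yields the partition sets.

```python
from collections import defaultdict

# Hypothetical encoding: a catalytic rule c_j a -> c_j u is the triple
# (catalyst, a, u); stripping the catalyst yields the non-cooperative
# rule a -> u, and grouping by catalyst yields the partitioning R_1, ..., R_k.
catalytic_rules = [
    ("c1", "a", "ab"), ("c1", "b", "a"),
    ("c2", "a", "bb"),
]

partition = defaultdict(list)
for cat, lhs, rhs in catalytic_rules:
    partition[cat].append((lhs, rhs))   # non-cooperative version a -> u

print(dict(partition))
# {'c1': [('a', 'ab'), ('b', 'a')], 'c2': [('a', 'bb')]}
```

Since each catalyst can be used by at most one rule per derivation step, applying at most one rule from each of these partition sets under min_1 mimics the behavior of the purely catalytic system under max.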

Minimal parallelism with all applicable sets
There is an even stranger variant of minimal parallelism, already defined in [18]: To a configuration C, we can only apply a multiset of rules which contains at least one rule from each R_j, 1 ≤ j ≤ h, that contains a rule applicable to C, i.e., we take all possible multisets R′ from Appl(Π, C, asyn) which also fulfill this condition. This derivation mode is abbreviated all aset min in [18] and used under the notion amin in [4].

Consider again the P system from Example 4, now working in the amin-mode: as both the rule from R_1 and the rule from R_2 are applicable, the only (multi)set of rules applicable to the configuration aa is the same as the one obtained when starting from smax, i.e., R′ = {a → bb, a → bbb}, whose application yields the result b^5.
Yet if we take w = a instead, then still both the rule from R_1 and the rule from R_2 are applicable, but there are not enough resources of symbols a for applying both rules; hence, no derivation step is possible in this case with the derivation mode amin. On the other hand, with both of the first two variants of the minimally parallel derivation mode, we may apply either a → bb or a → bbb to w = a, thus getting bb and bbb, respectively.
Again, we observe that the results with different definitions of the minimally parallel derivation mode may be different when two rules are competing for the same object(s).
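The amin mode can be sketched in a few lines (a toy check with made-up helper names, not code from the paper): a multiset of rules qualifies only if it contains at least one rule from every partition set that holds a rule applicable to the current configuration.

```python
from collections import Counter
from itertools import combinations_with_replacement

def fits(ms, rules, config):
    # joint applicability: the combined left-hand sides fit into config
    need = Counter()
    for i in ms:
        need.update(rules[i][0])
    return all(config[s] >= n for s, n in need.items())

def amin_multisets(config, rules, partition, max_size=4):
    # partition sets holding at least one rule applicable to config
    required = [part for part in partition
                if any(fits([i], rules, config) for i in part)]
    out = []
    for size in range(1, max_size + 1):
        for ms in combinations_with_replacement(range(len(rules)), size):
            if fits(ms, rules, config) and \
               all(any(i in ms for i in part) for part in required):
                out.append(ms)
    return out

rules = [(Counter("a"), Counter("bb")), (Counter("a"), Counter("bbb"))]
partition = [{0}, {1}]

print(amin_multisets(Counter("aa"), rules, partition))  # [(0, 1)] -> b^5
print(amin_multisets(Counter("a"), rules, partition))   # [] : no step possible
```

The second call shows the strange behavior discussed above: for w = a both partition sets contain an applicable rule, but no multiset covering both fits into the available resources, so no derivation step exists under amin.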

Halting conditions
As already mentioned, P systems working in the maximally parallel derivation mode at first sight look like (E)0L systems. Only the total halting condition destroys this seemingly obvious similarity. Yet the connection between P systems working in the maximally parallel derivation mode and (E)0L systems can indeed be established when using unconditional halting, see [7].
Besides unconditional halting, in this section we will also discuss some results for partial halting and halting with states. In each case, as in Sect. 5, we will show how to obtain the special multiset language L_1 = {a^(2^n) | n ≥ 1}.

Unconditional halting
Example 6 Consider the one-membrane P system with the single rule a → aa and initial multiset aa; with every application of this rule, the number of symbols a is doubled, i.e., after n − 1 derivation steps, n ≥ 1, we get a^(2^n); hence, we obtain N_{gen,max,u}(Π) = L_1.
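The language of Example 6 can be reproduced by a few lines (a sketch with invented helper names; the initial multiset aa is assumed, matching the counting "after n − 1 steps we get a^(2^n)"):

```python
from collections import Counter

def step(config):
    # maximally parallel application of the single rule a -> aa:
    # every copy of a is doubled in one derivation step
    return Counter({"a": 2 * config["a"]})

config, results = Counter("aa"), []
for _ in range(5):
    results.append(config["a"])  # unconditional halting: every configuration counts
    config = step(config)

print(results)  # [2, 4, 8, 16, 32], i.e., a^(2^n) for n = 1, ..., 5
```

Under unconditional halting every reachable configuration yields a result, so the collected counts are exactly the powers 2^n, n ≥ 1, truncated here after five steps.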
According to the results shown in [7], the following results hold true if we use extended systems (indicated by the additional symbol E) and only take results from the output membrane which are terminal:

Y_{gen,δ,u}(EOP_1(ncoo)) = YE0L, for δ ∈ {max, max_rules, max_objects},
Y_{gen,δ,u}(OP_1(ncoo)) = Y0L, for δ ∈ {max, max_rules, max_objects}.

These results now show the somewhat expected correspondence between the two parallel mechanisms of P systems and Lindenmayer systems.
We finally mention that with unconditional halting, considering acceptance would not make any sense, because according to the standard definition of accepting P systems, in any case they would accept every input.

Partial halting
Partial halting allows us to stop a derivation as soon as some specific symbols are not present any more:

Example 7 Consider the one-membrane P system Π = (V = {a, s}, T = {a}, w = as, R_1 ∪ R_2, ⟹_{Π,max,h}), where R_1 = {a → aa} and R_2 = {s → s, s → λ} form the partitioning of the rule set R = {a → aa, s → s, s → λ}.
As long as one of the rules from R_2 can be applied to the symbol s, the symbols a are doubled as usual by the rule a → aa from R_1. Using s → s in n − 1 derivation steps, n ≥ 1, and finally applying s → λ, we get a^(2^n); hence, N_{gen,max,h}(Π) = L_1.
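Example 7 can be traced with a short script (helper names invented here, not from the paper): the derivation continues as long as every partition set still holds an applicable rule, and under partial halting it stops as soon as R_2 runs out of the symbol s.

```python
from collections import Counter

R1 = [("a", "aa")]
R2 = [("s", "s"), ("s", "")]      # "" stands for the empty word, i.e., s -> lambda

def applicable(part, config):
    return [r for r in part if config[r[0]] > 0]

def derive(n):
    """Use s -> s for n - 1 steps, then s -> lambda; the a's are doubled in
    every step since the mode is maximally parallel."""
    config = Counter("as")
    # partial halting: continue while each partition set has an applicable rule
    while applicable(R1, config) and applicable(R2, config):
        rhs = "s" if n > 1 else ""              # nondeterministic choice, fixed here
        config = Counter({"a": 2 * config["a"]})  # all a's rewritten by a -> aa
        config.update(rhs)                      # the single s rewritten by chosen rule
        n -= 1
    return config

print(derive(3))  # Counter({'a': 8}): halts since R2 has no applicable rule left
```

Calling derive(n) for n = 1, 2, 3, ... yields a^2, a^4, a^8, ..., i.e., exactly the language L_1.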
Some interesting results for partial halting can be found in [3, 4, 14].

Halting with states
In general, speaking of states reminds us of mechanisms like register machines, where a computation halts when the halt instruction l_h : HALT is applied. In simulations of register machines by P systems, the computation often is made halting by applying the final rule l_h → λ, provided no trap rules are still applicable. When l_h disappears, this means that no instruction label appears any more in the configuration of the simulating P system; such a condition checking for the absence (or presence) of specific symbols in a given configuration is computable, i.e., decidable, and therefore is a condition we can use for halting with states (which ironically in this case means the absence of state symbols).
Example 8 Consider the one-membrane P system which uses the same ingredients as the one considered in Example 7, but which, instead of partial halting, now uses the condition that a computation halts if no symbol s is present any more. This yields the same computations as for the P system in Example 7, with the only difference that the computations halt because of s having been deleted. Thus, we obtain N_{gen,max,s}(Π) = L_1.

Conclusion
In this paper, the effects of using different derivation modes on the computing power of many variants of hierarchical P systems have been illustrated. In particular, some differences between the maximally parallel derivation modes and the maximally parallel set derivation modes have been exhibited. We have also given an overview of some control mechanisms used for P systems. Moreover, we have discussed the effects of using different halting conditions such as unconditional and partial halting.
Many more relations between derivation modes and halting conditions could have been discussed, but this would have gone far beyond the scope of a regular article.