# Membrane Computing: Basics and Frontiers

## Abstract

Membrane computing is a branch of natural computing inspired by the structure and the functioning of the living cell, as well as by the cooperation of cells in tissues, colonies of cells, and neural nets. This chapter briefly introduces the basic notions and (types of) results of this research area, also discussing open problems and research topics. Several central classes of computing models (called P systems) are considered: cell-like P systems with symbol objects processed by means of multiset rewriting rules, symport/antiport P systems, P systems with active membranes, spiking neural P systems, and numerical P systems.

## 1 Introduction

Membrane computing is a branch of natural computing initiated in [22] and having as its main goal to abstract computing models from the architecture and the functioning of living cells, considered alone or as parts of higher order structures, such as tissues, organs (brain included), and colonies of cells (e.g., of bacteria). Several classes of computing devices, called *P systems*, were introduced in this framework. Their basic features/ingredients are the following: (1) *the membrane structure*, of a cell-like (hierarchical, described by a tree) or a tissue-like (described by an arbitrary graph) type, defining compartments (also called regions), where (2) *multisets of objects* (i.e., sets with multiplicities associated with their elements) evolve according to given (3) *evolution rules* inspired from the biochemistry of the cell. The objects and the rules are placed in the compartments; the functioning of the model is *distributed*, as imposed by the compartments defined by membranes, and *parallel*: evolution rules are applied simultaneously in all regions, to all objects which can evolve.

We will enter immediately into some details. What is important here is to understand membrane computing as a framework for devising computing models (where “computing” is understood in the Turing sense, of a well-defined input–output process which provides a result after halting) which are distributed and parallel, and rather different from the many computing devices known in (theoretical) computer science: they deal with a data structure which is not very common in computer science but is fundamental for biology, the multiset. We interpret the chemicals which swim in water in the compartments of a cell, from ions to large macromolecules, as atomic objects (in the etymological sense), identified by symbols in a given alphabet; their multiplicity in a compartment matters, but there is no ordering and no positional information (as in the strings usually processed in computer science by automata, grammars, rewriting machineries). We ignore at this stage the chemicals bound on membranes or on the cytoskeleton (as well as the structure of chemicals), although such details were also taken into consideration in certain variants of P systems. In turn, the rules by which the objects evolve are also inspired by the functioning of the cell; the most investigated are the multiset rewriting rules, similar to biochemical reactions, but many other types of rules were also considered: abstractions of biological operations such as symport and antiport, membrane division, creation, separation, etc. In neural-like P systems, one uses specific operations, for instance, spiking rules.

At this level, membrane computing is interested in understanding the computing processes taking place in cells, in order to learn something possibly useful for computer science, in the same way as many (actually, most: the only exception is DNA computing, which comes with a different goal, that of using DNA and other ingredients from biology as a support for computations) areas of natural computing look to biology in order to improve the use/the efficiency of existing computers. The domain developed very much in this direction, but the applications are also well developed, especially in biology and biomedicine, in ecology, linguistics, computer science, economics, approximate optimization, etc. We will discuss here only some theoretical issues, and we refer the reader to [2, 3, 30] for details about applications.

When introducing a new computability model, the basic theoretical questions concern the *computing power* and the *computing efficiency*, in both cases comparing the new model with standard models and classifications in computer science: the power of Turing machines and of their restrictions, and the classes in computational complexity. The equivalence in power with Turing machines is desired from two points of view: according to the Church–Turing thesis, this is the maximal power an algorithmic model can achieve, and, moreover, the equivalence with Turing machines also means *programmability* (the existence of universal computing devices in the sense of universal Turing machines). In turn, the efficiency question is expected to have answers indicating a speedup when passing from Turing machines to the new model, if possible indicating (even only theoretical) ways to solve classically intractable problems (typically, **NP**-complete problems) in a feasible time (typically, polynomial).

Membrane computing provides encouraging answers to both these questions. Most of the classes of P systems are Turing complete, even when using ingredients of a reduced complexity: a small number of membranes, rules of simple forms, and ways of controlling the use of rules directly inspired from biology. Certain classes of P systems are also efficient: they can solve **NP**-complete (even **PSPACE**-complete) problems in polynomial (often, linear) time. This speedup is obtained by means of a space-time trade-off, with the (exponential) space obtained during the computation, in linear time, by means of bioinspired operations, such as membrane division, membrane creation, string replication, etc.

In the fifteen years since its beginning, membrane computing has become quite well developed at all levels (theory, applications, software), and its bibliography is very large. The reader is advised to look for details, at the various levels of development of this research area, in [25] and in the handbook [30]. A comprehensive source of information (with many papers, PhD theses, and pre-proceedings volumes available for downloading) is the P systems website [36]. In general, we refer the reader to these bibliographical sources, so that in what follows we only specify a few references.

## 2 Cell-Like P Systems

This section introduces *cell-like P systems* processing multisets of symbol objects. We discuss first the basic ingredients: membrane structure, multisets, and multiset processing rules.

### 2.1 Membrane Structure

The starting point is the (eukaryotic) cell and its compartmentalization by means of membranes, hierarchically arranged. We represent such a (spatial) structure in the way suggested in Fig. 1. Please notice the intuitive terminology used, the way of defining compartments (“protected reactors,” where specific biochemistry takes place), and the one-to-one correspondence between membranes and compartments. The membranes are usually labeled; thus, we can identify by a label both the membrane and its associated region.

### 2.2 Multisets

In the compartments of a cell, there are various chemicals swimming in water (at this stage, we ignore the chemicals, mainly proteins, bound on membranes, but there are classes of P systems taking them into account). Therefore, the natural data structure to use in this framework is the *multiset*, the set with multiplicities associated with its elements.

Formally, a *multiset* over a given set *U* is a mapping \(M: U\longrightarrow \mathbf{N}\), where **N** is the set of nonnegative integers. For *a* ∈ *U*, *M*(*a*) is the *multiplicity of a in M*. If the set *U* is finite, \(U =\{ a_{1},\ldots,a_{n}\},\) then the multiset *M* can be explicitly given in the form \(\{(a_{1},M(a_{1})),\ldots,(a_{n},M(a_{n}))\}\), thus specifying for each element of *U* its multiplicity in *M*. In membrane computing, the usual way to represent a multiset \(M =\{ (a_{1},M(a_{1})),\ldots,(a_{n},M(a_{n}))\}\) over a finite set \(U =\{ a_{1},\ldots,a_{n}\}\) is by using strings: \(w = a_{1}^{M(a_{1})}a_{2}^{M(a_{2})}\ldots a_{n}^{M(a_{n})}\) and all permutations of *w* represent *M*; the empty multiset is represented by *λ*, the empty string. The total multiplicity of elements of a multiset (this is also called the *weight* of the multiset) clearly corresponds to the length of a string representing it.
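As an illustration, the string representation of multisets maps naturally onto Python's `collections.Counter`; the following sketch (our own, not from the chapter) checks the basic properties just described:

```python
from collections import Counter

# The multiset M over U = {a, b, c} given by the string 'aab':
# M(a) = 2, M(b) = 1, M(c) = 0; any permutation of the string represents M.
M = Counter("aab")
assert M == Counter("aba")            # permutations represent the same multiset
assert M["a"] == 2 and M["c"] == 0    # multiplicities

# The weight of the multiset equals the length of a string representing it.
assert sum(M.values()) == len("aab")

# Union and difference, used later when applying rules:
assert Counter("aab") + Counter("bc") == Counter("aabbc")
assert Counter("aabbc") - Counter("ab") == Counter("abc")
```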

A few basic notions about multisets (union, inclusion, difference) are useful in membrane computing, but they are defined in a natural way; hence, we do not recall them here and refer to [1] for details.

### 2.3 Evolution Rules

The main way the chemicals present in the compartments of a cell evolve is by means of biochemical reactions which consume certain chemicals and produce other chemicals. In what follows, we consider the chemicals as unstructured, hence described by symbols from a given alphabet; we call these symbols *objects*. There are classes of P systems which also consider structured objects, especially described by strings, but we refer to the bibliography for details. In each compartment of a cell (of the membrane structure describing it), we have a multiset of objects, maybe the empty one. Corresponding to the biochemical reactions, we get *multiset rewriting rules*.

We write such a rule in the form \(u \rightarrow v\), where *u* and *v* are multisets of objects (represented by strings over a given alphabet). The objects indicated by *u* are consumed and those indicated by *v* are produced. It is important to have in mind that both the objects and the rules are placed in the compartments of the membrane structure. The rules in a given compartment are applied only to objects in the same compartment. In order to make the compartments cooperate, we can move objects across membranes, and this can be achieved by adding *target indications* to the objects produced by a rule *u* → *v*. The indications we use here are *here*, *in*, and *out*, with the meanings that an object associated with the indication *here* remains in the same region, one associated with the indication *in* goes immediately into an adjacent lower membrane, nondeterministically chosen, and *out* indicates that the object has to exit the membrane, thus becoming an element of the region surrounding it. For instance, we can have *aab* → (*a*, *here*)(*b*, *out*)(*c*, *here*)(*c*, *in*). Using this rule in a given region of a membrane structure means to consume two copies of *a* and one of *b* (they are removed from the multiset of that region), while one copy of *a*, one of *b*, and two of *c* are produced; the resulting copy of *a* remains in the same region, and the same happens with one copy of *c* (indication *here*), while the new copy of *b* exits the membrane, going to the surrounding region (indication *out*), and one of the new copies of *c* enters one of the child membranes, nondeterministically chosen. If no such child membrane exists, that is, if the membrane where the rule is applied is elementary, then the indication *in* cannot be followed, and the rule cannot be applied. 
In turn, if the rule is applied in the skin region, then *b* will exit into the environment of the system (and it is “lost” there; it can never come back, as there is no rule associated with the environment). In general, the indication *here* is not specified when giving a rule.
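The effect of one application of the example rule can be sketched in Python; the representation by `Counter` objects (and the names `region`, `parent`, `child`) is our own, hypothetical choice:

```python
from collections import Counter

# One application of aab -> (a,here)(b,out)(c,here)(c,in)
lhs = Counter("aab")
rhs = [("a", "here"), ("b", "out"), ("c", "here"), ("c", "in")]

region = Counter("aaabb")   # the region holding the rule
parent = Counter()          # the surrounding region (target 'out')
child = Counter()           # a nondeterministically chosen child (target 'in')

assert all(region[s] >= n for s, n in lhs.items())  # the rule is applicable
region -= lhs                                       # consume u
for sym, target in rhs:                             # produce v, following targets
    {"here": region, "out": parent, "in": child}[target][sym] += 1
```

After this step, `region` holds the multiset *aabc*, one *b* reached the parent region, and one *c* reached the chosen child.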

The evolution rules are classified according to the complexity of their left hand side. A rule with at least two objects in its left hand side is said to be *cooperative*; a particular case is that of *catalytic* rules, of the form *ca* → *cv*, where *c* is an object (called catalyst) which assists the object *a* to evolve into the multiset *v*; rules of the form *a* → *v*, where *a* is an object, are called *noncooperative*.

Biochemistry suggests various ways to extend the form of the rules, in particular ways to control their application. For instance, we can add *promoters* (objects which should be present in the compartment where the rule is applied) and *inhibitors* (objects which, if present, forbid the use of the rule). A *priority* relation can also be considered, as a partial order relation on the set of rules present in a region; in each step, only rules with a maximal priority among the applicable rules can be used. A special ingredient is the membrane *dissolution* action: when applying a rule of the form *u* → *v δ*, besides the replacement of the multiset *u* by the multiset *v*, the membrane where the rule is applied is “dissolved”; its contents, objects and membranes alike, become part of the contents of the surrounding membrane, while its rules disappear with the membrane. The skin membrane is never dissolved.

There are several other types of rules for handling objects and also for handling membranes. The basic ones will be presented later. Now we pass to a crucial point in the definition of P systems: the ways of using the rules. Membranes, objects, and rules constitute the architecture of the computing model (its syntax); it is important to see how this model can function (its semantics).

### 2.4 Ways of Using the Rules

Having in mind the biochemical reality, the rules in a compartment of a membrane structure should be used in a *nondeterministic* (the objects to evolve and the rules by which they evolve are chosen in a nondeterministic manner) and *parallel* way. The parallelism adopted in membrane computing is the *maximal* one. More formally stated, we look at the *set* of rules and the multiset of objects in a given compartment and try to find a *multiset* of rules, by assigning multiplicities to rules, with two properties: (1) the multiset of rules is *applicable* to the multiset of objects available in the respective region, that is, there are enough objects to apply the rules the number of times indicated by their multiplicities, and (2) the multiset is *maximal*, i.e., no further rule can be added to it (no multiplicity of a rule can be increased) such that the obtained multiset is still applicable.

Thus, an evolution step in a given region consists of finding a maximal applicable multiset of rules, removing from the region all objects specified in the left hand sides of the chosen rules (with multiplicities as indicated by the rules and by the number of times each rule is used), producing the objects from the right hand sides of the rules, and then distributing these objects as indicated by the targets associated with them.
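A minimal sketch of such an evolution step, assuming a simplified setting with a single region and no target indications (the function name `maximal_step` is our own), could look as follows:

```python
import random
from collections import Counter

def maximal_step(region, rules):
    """One maximally parallel step in a region (targets ignored in this sketch).
    rules: list of (lhs, rhs) pairs of Counters with nonempty lhs."""
    chosen = Counter()
    # phase 1: nondeterministically assemble a maximal applicable multiset of rules
    while True:
        applicable = [i for i, (lhs, _) in enumerate(rules)
                      if all(region[s] >= n for s, n in lhs.items())]
        if not applicable:
            break  # maximality: no further rule fits the remaining objects
        i = random.choice(applicable)
        region -= rules[i][0]   # reserve the objects consumed by this application
        chosen[i] += 1
    # phase 2: produce the right-hand sides; the products are available
    # only in the next step, so they are added after all choices were made
    for i, times in chosen.items():
        for _ in range(times):
            region += rules[i][1]
    return region, chosen

# with a single rule a -> bb, maximal parallelism rewrites every copy of a
reg, chosen = maximal_step(Counter("aaa"), [(Counter("a"), Counter("bb"))])
assert reg == Counter("bbbbbb") and chosen[0] == 3
```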

Several alternatives are possible and were investigated in membrane computing: limited parallelism (only a given number of rules should be applied), sequential use of rules (only one at a time in each region), minimal parallelism (if a region can evolve, then at least one rule is used there), and asynchronous use of rules (no clock is assumed at the level of the system).

We continue here only with the maximal parallelism, and we define now *computations*, which are sequences of *transitions* between *configurations* of the P system, defined as above, by the maximally parallel nondeterministic use of rules in each configuration. Similar to Turing machines, we consider that a computation is successful if and only if it halts, that is, it reaches a configuration where no rule can be applied to the existing objects. With a halting computation, we can associate a *result* in various ways. One possibility is to count the objects present in the halting configuration in a specified elementary membrane (*internal output*), or we can count the objects which leave the system during the computation (*external output*). In both cases the result is a number.

Starting from an initial configuration of the system and proceeding through transitions, because of the nondeterminism of the application of rules, we can get several halting computations, hence several results. In this way, a P system *computes* (or *generates*) a set of numbers. This corresponds to grammars in formal language theory. We can also use the system in the *accepting* mode, corresponding to automata: we start from the initial configuration, where some objects codify an input, and we accept that input if and only if the computation halts. A more general case is that of *computing* a function: we start with the argument introduced in the initial configuration, and we obtain the value of the function at the end of the computation.

### 2.5 A Formal Definition of a P System

A P system (of degree *m*) is a construct

\(\Pi = (O,C,\mu,w_{1},\ldots,w_{m},R_{1},\ldots,R_{m},i_{in},i_{out}),\)

where:

- *O* is the alphabet of objects;
- *C* ⊆ *O* is the set of catalysts;
- *μ* is the membrane structure (with *m* membranes), given as an expression of labeled parentheses;
- \(w_{1},\ldots,w_{m}\) are strings over *O* representing the multisets of objects present in the *m* regions of *μ* at the beginning of a computation;
- \(R_{1},\ldots,R_{m}\) are finite sets of evolution rules associated with the regions of *μ*; and
- *i*_{in}, *i*_{out} are the labels of the input and output regions, respectively; *i*_{out} can be the environment, denoted *env*.

If the system is used in the generative mode, then *i*_{in} is omitted, and if the system is used in the accepting mode, then *i*_{out} is omitted. The number *m* of membranes in *μ* is called the *degree* of *Π*.

In the generative case, the set of numbers computed by *Π* (in the maximally parallel nondeterministic mode) is denoted by *N*(*Π*). The family of all sets *N*(*Π*) computed by systems *Π* of degree at most *m* ≥ 1 and using rules of the forms indicated by *α* is denoted by *NOP*_{m}(*α*); if there is no bound on the degree of systems, then the subscript *m* is replaced with ∗. According to the previous classification, *α* ∈ {*ncoo*, *cat*, *coo*}, with the obvious meaning.

### 2.6 A Simple Example

We consider a P system computing the square *n*^{2} of any number *n* ≥ 1: the number *n* is introduced in the system in the form of *n* copies of the object *a* placed in region 2, and then the computation can start. It proceeds as follows.

The only rule to be applied is \(a \rightarrow b_{1}b_{2}\) in region 2. Because of the maximal parallelism, it has to be applied simultaneously to all copies of *a*. Thus, in one step, all *n* objects *a* are replaced by *n* copies of *b*_{1} and *n* copies of *b*_{2}. From now on, the other rules from region 2 can be used. The catalytic rule \(cb_{1} \rightarrow cb_{1}^{{\prime}}\) can be used only once in each step, because the catalyst is present in only one copy. This means that in each step one copy of *b*_{1} gets primed. Simultaneously (because of the maximal parallelism), the rule *b*_{2} → *b*_{2}*e* should be applied as many times as possible, and this means *n* times, because we have *n* copies of *b*_{2}. In this way, in each step we change one *b*_{1} to *b*_{1}^{′} and we produce *n* copies of *e* (one for each copy of *b*_{2}). The computation should continue in region 2 (note that no rule can be applied in the skin region) as long as there are applicable rules. At any step, instead of *cb*_{1} → *cb*_{1}^{′}, in region 2 we can use the rule *cb*_{1} → *cb*_{1}^{′}*δ*, which replaces one *b*_{1} by *b*_{1}^{′} but also dissolves membrane 2. All objects in region 2 are then left free in region 1. If at least one object *b*_{1} still exists, then the computation will continue forever by means of the rule *b*_{1} → *b*_{1} from region 1; hence, no result is obtained. Conversely, as long as the rule *cb*_{1} → *cb*_{1}^{′}*δ* is not used, the rule *b*_{2} → *b*_{2}*e* in region 2 must be used; hence, the computation is again unsuccessful: it lasts forever, even when no copy of *b*_{1} exists anymore. This means that the rule *cb*_{1} → *cb*_{1}^{′}*δ* must be used, and exactly in the moment when the last object *b*_{1} is consumed, replaced by *b*_{1}^{′}. Consequently, a catalytic rule is applied in each of exactly *n* steps, in parallel with *n* applications of the rule *b*_{2} → *b*_{2}*e* in each step. 
In this way, *n*^{2} copies of *e* are introduced in region 2 and left free in the skin region after dissolving membrane 2. The rule *e* → *e*_{out} will send immediately all these copies of *e* to the environment, and the computation halts. The result is the desired one, *n*^{2}.
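The computation just described can be traced in a few lines of Python; this is a hand-rolled trace of the unique successful computation path (the name `simulate` is our own), not a general P system simulator:

```python
from collections import Counter

def simulate(n):
    """Trace the successful computation of the example system on input a^n."""
    r2 = Counter({"a": n, "c": 1})
    # step 1: a -> b1 b2 applied, by maximal parallelism, to all n copies of a
    r2.update({"b1": r2["a"], "b2": r2["a"]})
    del r2["a"]
    # n catalytic steps: c b1 -> c b1' once per step (the catalyst is unique),
    # in parallel with b2 -> b2 e applied to all n copies of b2
    while r2["b1"] > 0:
        r2["b1"] -= 1
        r2["e"] += r2["b2"]
    # the last catalytic step uses the delta variant: membrane 2 dissolves,
    # its objects reach the skin, and e -> e_out expels every copy of e
    return r2["e"]

assert simulate(3) == 9   # n copies of e per step, for n steps
```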

Note that the number *n* is introduced in the system from outside; we can modify the system so that it generates itself the *n* copies of *a* in region 2, for all *n* ≥ 1. For instance, we can add a further membrane, with label 3, inside membrane 2, containing initially a copy of an object *d* and two rules: *d* → *da* and *d* → *a δ*. After producing *m* ≥ 0 copies of *a* by means of the rule *d* → *da*, one introduces one further copy of *a* and one dissolves membrane 3 (rule *d* → *a δ*). From now on, the computation continues as above. For the modified system *Π*^{′}, we get *N*(*Π*^{′}) = {*n*^{2} ∣ *n* ≥ 1}.

## 3 The Power of Catalytic P Systems

We have mentioned already that many classes of P systems are equivalent in power with Turing machines. This is not a surprise for cooperative P systems, but it is unexpected for catalytic P systems. Moreover, the number of catalysts sufficient for obtaining computational completeness is rather small.

Let us denote by *NP*_{m}(*cat*_{r}) the family of sets of numbers *N*(*Π*) computed (generated) by P systems with at most *m* membranes, using catalytic or noncooperative rules, containing at most *r* catalysts. When all the rules of a system are catalytic, we say that the system is *purely catalytic*, and the corresponding families of sets of numbers are denoted by *NP*_{m}(*pcat*_{r}). When the number of membranes is not bounded by a specified *m* (it can be arbitrarily large), then the subscript *m* is replaced with ∗.

We denote by *NRE* the family of recursively enumerable (i.e., Turing computable) sets of natural numbers and by *NREG* the family of semilinear sets of natural numbers (they are the length sets of Chomsky regular languages, hence the notation).

The following fundamental results are known:

### Theorem 1

*NP*_{∗}(*ncoo*) = *NREG* ⊂ *NRE* = *NP*_{2}(*cat*_{2}) = *NP*_{∗}(*pcat*_{3}).

Two resisting *open problems* appear here, related to the borderline between universality and non-universality: (1) are catalytic P systems with only one catalyst universal? (2) are purely catalytic P systems with two catalysts universal? The conjecture is that both these questions have a negative answer, but it is also felt that “one catalyst is almost universal”: adding to P systems with one catalyst various features which, at the first sight, look weak enough, we already obtain the universality (see [8]); similar results were obtained also for purely catalytic P systems with two catalysts (see [5]).

Here are some of the features which, added to one catalyst P systems, lead to universality:

- introducing a *priority* relation among rules [22];
- using *promoters* and *inhibitors* associated with the rules;
- controlling the computation by means of the *membrane permeability*, by the actions *δ* (decreasing the permeability) and *τ* (increasing the permeability) [23];
- besides catalytic and noncooperative rules, also using rules for *membrane creation* [19];
- considering, instead of usual catalysts, *bi-stable catalysts* [31] or *mobile catalysts* [14];
- imposing *target restrictions* on the used rules [8]; the universality was obtained for P systems with 7 membranes, and it is an open problem whether or not the number of membranes can be diminished;
- imposing to P systems the idea of *time-varying* grammars and splicing systems [8]; the universality of time-varying P systems is obtained for one catalyst P systems with only one membrane, having the period equal to 6, and it is an open question whether the period can be decreased;
- using in a transition only (labeled) rules with the same label, the so-called *label restricted* P systems [15].

Several of these results were extended in [5] to purely catalytic P systems with two catalysts. It remains open to do this for all the previous results, as well as to look for further ingredients which, added to one catalyst P systems or to purely catalytic P systems with two catalysts, can lead to universality. It would be interesting to find such ingredients which work for one catalyst systems and not for purely catalytic systems with two catalysts, and conversely.

We end this section with a somewhat surprising issue: we know that *NP*_{2}(*cat*_{2}) = *NRE*, but no example of a P system with two catalysts which generates a nontrivial set of numbers (for instance, \(\{2^{n}\mid n \geq 1\},\{n^{2}\mid n \geq 1\}\)) is known. In fact, the problem is to find a system of this kind as simple as possible (otherwise, just repeating the construction in the proof from [7], starting from a register machine computing a set as above, we get an example, but of a large size). A first answer to this question is given in [33], where a catalytic P system with 54 rules is produced, but it is expected that this number could be reduced.

## 4 Efficiency: P Systems with Active Membranes

We proceed now to the second main question to investigate for any new computing model: the efficiency. We have mentioned that for P systems as considered above, the so-called Milano theorem was proved in [35]: such systems can be simulated in polynomial time by means of Turing machines; hence (assuming that **P** ≠ **NP**, as one usually expects), these systems cannot solve **NP**-complete problems in polynomial time. This is somewhat surprising because even noncooperative P systems can generate exponentially many objects in linear time: just consider a rule of the form *a* → *aa*; because of the maximal parallelism, in *n* steps we get 2^{n} copies of object *a*. This exponential workspace does not help, and the intuition is that this happens because this space is not structured; the same rules are applied to all exponentially many objects. The situation changes if further membranes can be created, thus introducing a structure in the set of objects.
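The doubling behavior of the rule *a* → *aa* under maximal parallelism is easy to check:

```python
# Maximal parallelism applied to the rule a -> aa: every copy of a is
# rewritten in every step, so the population doubles at each step.
count = 1                    # one copy of a in the initial configuration
for step in range(20):
    count *= 2               # all copies evolve simultaneously
assert count == 2 ** 20      # exponentially many objects after linearly many steps
```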

We pass now to *P systems with active membranes*, introduced in [24], which are constructs of the form

\(\Pi = (O,H,\mu,w_{1},\ldots,w_{m},R),\)

where *O* is the alphabet of objects; *H* is a finite set of *labels* for membranes; *μ* is a membrane structure of degree *m* ≥ 1, with *polarizations* (“electrical charges” from \(\{+,-,0\}\)) associated with the membranes; \(w_{1},\ldots,w_{m}\) are the multisets of objects initially present in the *m* regions of *μ*; and *R* is a finite set of rules, of the following forms:

- (a) \([\,a \rightarrow v\,]_{h}^{e}\), for \(h \in H,e \in \{ +,-,0\},a \in O,v \in O^{{\ast}}\)
  (object evolution rules)
- (b) \(a\,[\ ]_{h}^{e_{1}} \rightarrow [\,b\,]_{h}^{e_{2}}\), for \(h \in H,e_{1},e_{2} \in \{ +,-,0\},a,b \in O\)
  (*in* communication rules)
- (c) \([\,a\,]_{h}^{e_{1}} \rightarrow [\ ]_{h}^{e_{2}}\,b\), for \(h \in H,e_{1},e_{2} \in \{ +,-,0\},a,b \in O\)
  (*out* communication rules)
- (d) \([\,a\,]_{h}^{e} \rightarrow b\), for \(h \in H,e \in \{ +,-,0\},a,b \in O\)
  (dissolving rules)
- (e) \([\,a\,]_{h}^{e_{1}} \rightarrow [\,b\,]_{h}^{e_{2}}\,[\,c\,]_{h}^{e_{3}}\), for \(h \in H,e_{1},e_{2},e_{3} \in \{ +,-,0\},a,b,c \in O\)
  (division rules for elementary membranes; in reaction with an object, the membrane is divided into two membranes with the same label and possibly different polarizations; the object specified in the rule is replaced in the two new membranes possibly by new objects; the remaining objects are duplicated and may evolve in the same step by rules of type (a))

Note that each rule specifies the membrane where it is applied; the membrane is part of the rule; that is why we consider a global set of rules, *R*. The rules are applied in the maximally parallel manner, with the following details: a membrane can be subject to only one rule of types (b)–(e); inside each membrane, the rules of type (a) are applied in parallel; each copy of an object is used by only one rule of any type. The rules are used in a bottom-up manner: we use first the rules of type (a) and then the rules of other types; in this way, in the case of dividing membranes, the result of using first the rules of type (a) is duplicated in the newly obtained membranes. As usual, only halting computations give a result, in the form of the number of objects expelled into the environment during the computation.
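A rough sketch of the exponential workspace created by division rules, assuming a hypothetical rule of type (e) that divides every membrane containing an object *a* into two copies of itself:

```python
from collections import Counter

# Each elementary membrane is represented by its multiset of objects;
# a division rule duplicates the membrane and its (remaining) contents.
membranes = [Counter("a")]        # one elementary membrane holding one a
for step in range(10):
    # every membrane divides; the contents are duplicated in both copies
    membranes = [m.copy() for m in membranes for _ in (0, 1)]
assert len(membranes) == 2 ** 10  # exponential workspace in linear time
```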

Several generalizations are possible. For instance, a division rule can also change the labels of the involved membranes:

(*e*^{′}) \([\,a\,]_{h_{1}}^{e_{1}} \rightarrow [\,b\,]_{h_{2}}^{e_{2}}\,[\,c\,]_{h_{3}}^{e_{3}}\), for \(h_{1},h_{2},h_{3} \in H,e_{1},e_{2},e_{3} \in \{ +,-,0\},a,b,c \in O\).

The change of labels can also be considered for rules of types (b) and (c). Also, we can consider the possibility of dividing membranes in more than two copies or even of dividing nonelementary membranes (in such a case, all inner membranes are duplicated in the new copies of the membrane).

P systems with active membranes can be used for computing numbers, in the usual way, but their main usefulness is in devising polynomial time solutions to computationally hard problems by a time-space trade-off. The space is created both by duplicating objects and by dividing membranes. An encoding of an instance of a decidability problem is introduced in the initial configuration of the P system (in the form of a multiset of objects), the computation proceeds, and if it halts and a special object yes is sent to the environment, then the respective instance has a positive answer.

A large research area starts at this point. The first important step is to formally define complexity classes for P systems with active membranes. This was done several years ago; we refer to [32] and to its references. The respective classes refer to a parallel computing time, which also covers the steps for producing the exponential space necessary for the computation. Of course, the ingredients used by the considered P systems are very important: is a specified type of rules (for instance, membrane dissolution rules) used or not? how many polarizations, three (as in the initial definition) or a smaller number? how much time is allowed for constructing the system(s) which will solve a given decidability problem?

An intriguing question appears here. In classic computational complexity, a problem *Q* is solved in a *uniform* way: we have to start from *Q* (and its size parameters) when constructing the algorithm *A*_{Q} which solves *Q*, and not from instances \(Q(1),Q(2),\ldots\) of *Q*; *A*_{Q} depends on *Q*, while encodings of instances *Q*(*i*) are introduced in *A*_{Q}, and their answer is provided. Starting from an instance *Q*(*i*) and building an algorithm *A*_{Q(i)} for solving that instance does not look fair; we could solve *Q*(*i*) while “programming” *A*_{Q(i)} and then provide the answer in a time which is not the correct one.

Interestingly enough, in many experiments in DNA computing, one however proceeds in this nonuniform way, constructing the “wet computer” starting from an instance of the problem, not from the problem itself. This can be accepted also in membrane computing, and even in complexity theory, provided that the time for constructing *A*_{Q(i)} is carefully limited. If this happens, then we cannot cheat “too much,” working on solving the problem itself during the programming phase. We call *semi-uniform* a solution to a problem *Q* obtained in this way. (Again, formal definitions can be found in the literature, e.g., in [32].) A natural problem appears now: what is the relation between uniform and semi-uniform complexity classes? (Surprisingly enough, classic complexity theory seems not to have examined this natural question.) When a problem can be solved in a semi-uniform way, can it also be solved (in the same amount of time) in a uniform way? (In many cases in membrane computing, uniform and semi-uniform families coincide—but not in all cases! Moreover, while initially semi-uniform solutions were reported for various problems, nowadays only uniform solutions are considered acceptable.)

Another “nonstandard” question concerns the possibility of using *precomputed resources*, and the suggestion comes again from biology. It is known that the brain contains a huge number of neurons, but at each moment only a small part of them seem to be active. The same is known for the liver, which at each moment uses only part of its cells, depending on the task it has to cope with. Can this idea also be used in computability? Roughly speaking, we can assume as given “for free” (we do not care how much time is needed to construct it) a computing device which is initially “arbitrarily large” but contains only a limited amount of information; a decidability problem is introduced in this initial configuration, the information spreads across the arbitrarily large workspace, and the answer is provided in a specified way. The strategy is rather natural; it can be useful in many circumstances where we have “enough” time before the computation itself (this can be the case, e.g., in cryptography); hence, we can prepare in advance an arbitrarily large “computer,” without too much data inside, which is fed with the problem to solve at the moment when the problem appears. Still, this way of solving problems has not yet been investigated in complexity theory. Actually, not even in membrane computing do we have a formal framework for this approach, although this strategy has been used several times for solving computationally hard problems in a polynomial time (especially in cases when the exponential workspace cannot be produced during the computation).

Another technical detail, specific to membrane computing, is the fact that, in their general form, P systems are nondeterministic computing devices. However, when solving a problem, we want to have a deterministic solution. This difficulty can be addressed either by working only with deterministic P systems or, more adequately and realistically, by leaving the system nondeterministic but requiring a behavior which guarantees that the solution is the “real” one. This can be achieved if we can ensure that the system is *confluent*: the computation proceeds nondeterministically, but either all computations eventually reach the same configuration, after which the computation continues in a deterministic way (strong confluence), or the computation is nondeterministic as a whole, but all computations halt and provide the same answer (weak confluence).

The complexity investigations are among the most active in membrane computing at this moment. Besides membrane division, membrane creation (by means of rules of the form \(a \rightarrow [_{h}b]_{h}\), where *a*, *b* are objects and *h* is a label), string replication, and other operations were used. The reader is referred to the complexity chapter from [30] and to the literature available at [36] for details—including many problems which are still open in this area.

## 5 Other Important Classes of P Systems

We briefly discuss now three important classes of P systems, with fundamental differences with respect to the P systems considered in the previous sections. For two of them (symport/antiport and spiking neural P systems) the motivation comes from biology; the third class (numerical P systems) has a motivation related to economics. Each of these variants of P systems gave birth to a strong branch of membrane computing, still with many open problems and research topics waiting to be addressed. (Numerical P systems also have a surprising area of applications—robot control. We strongly believe that these systems can find applications in many other areas where functions of several variables should be computed in an efficient way.)

We do not introduce here the tissue-like P systems, although their study is well developed, both concerning the theory (power and efficiency) and applications: the more general graph structure describing the arrangement of membranes/cells can cover more phenomena than the cell-like structure.

### 5.1 Symport/Antiport P Systems

In the functioning of the cell, one of the most interesting (and important) ways to pass chemicals across membranes is by means of protein channels, which can select in various ways the transported chemicals. We consider here the coupled passage of chemicals through protein channels, the operations called *symport* (two—or more—chemicals pass together across a membrane, in the same direction) and *antiport* (the case when the chemicals move in opposite directions).

We can formalize these operations by considering symport rules of the form (*x*, *in*) and (*x*, *out*) and antiport rules of the form (*z*, *out*; *w*, *in*), where *x*, *z*, and *w* are multisets of arbitrary size; one says that the length |*x*| of *x* is the *weight* of the symport rule, and max(|*z*|, |*w*|) is the *weight* of the antiport rule. Such rules just move objects across membranes, but they can replace the multiset rewriting rules considered in the previous sections, and we obtain in this way a class of P systems which is again computationally universal, equivalent in power with Turing machines.

A *P system with symport/antiport rules* is a construct which specifies an alphabet *O* of objects, a membrane structure *μ* with *m* membranes, the multisets of objects initially present in the regions of *μ*, a set *E* ⊆ *O*, and finite sets \(R_{1},\ldots,R_{m}\) of symport/antiport rules associated with the *m* membranes of *μ*. The objects of *E* are supposed to be present in the environment of the system with an arbitrary multiplicity.

We define transitions, computations, and halting computations in the usual way, making use of the nondeterministic maximally parallel mode of applying the rules. A system can be used in the generating, accepting, or computing mode.

As an example, consider a system with three nested membranes (we omit its complete specification) in which arbitrarily many copies of object *a* are assumed to be present in the environment. We introduce a number *n* of copies of object *a* in region 1. By using the rule (*aa*, *out*; *a*, *in*) ∈ *R*_{1} in a maximally parallel way, this number is repeatedly divided by 2, with a remainder in case the number reached is odd. If a remainder exists, then the rule (*a*, *in*) ∈ *R*_{2} must be used. We finish the halving of objects in region 1 with only one copy of *a* in region 2 if and only if *n* is of the form 2^{k}, for some *k* ≥ 0; otherwise, at least two copies of *a* arrive in region 2. In the latter case, the rules of *R*_{3} can be used forever; hence, the computation never halts. Consequently, the set of numbers accepted by the system *Π* is \(\{2^{k}\mid k \geq 0\}\).
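The halving process just described can be sketched in a few lines of code. This is an illustrative simulation of the described behavior (it counts copies of *a* instead of manipulating multisets explicitly; the function name is ours):

```python
def accepts(n: int) -> bool:
    """Simulate the symport/antiport example: region 1 repeatedly halves
    the number of copies of a via the antiport rule (aa, out; a, in);
    any unpaired copy is forced into region 2 by the symport rule (a, in).
    The system halts (accepts) iff region 2 ends with exactly one copy."""
    region2 = 0
    while n > 1:
        n, leftover = divmod(n, 2)  # maximally parallel use of (aa, out; a, in)
        region2 += leftover         # an odd leftover copy enters region 2
    region2 += n                    # the final copy also enters region 2
    return region2 == 1             # two or more copies would fire R3 forever
```

As expected, `accepts(n)` returns `True` exactly for the powers of two, matching \(\{2^{k}\mid k \geq 0\}\).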

Symport/antiport P systems (with reduced weights) are universal—we refer to [30] for details.

The computational complexity properties of these systems have not yet been explored; note that, in the form above, symport/antiport P systems do not have the possibility of dividing membranes (or other similar operations useful for generating an exponential workspace in a linear time), but such operations can be introduced, and then “fypercomputations” are expected, in the sense of [26].

## 6 Spiking Neural P Systems

Spiking neural P systems (SN P systems) have a completely different architecture and functioning, as they start not from the cell but from brain biology. Actually, we only consider here the cooperation of neurons by means of spikes, a feature also much investigated in neural computing (see, e.g., [18]). We do not define the SN P systems formally; we only describe such a system informally, followed by an example.

In short, an SN P system consists of a set of *neurons* (represented by membranes) placed in the nodes of a directed graph (the arcs are called *synapses*) and containing *spikes*, denoted by the symbol *a*. Thus, the architecture is that of a tissue-like P system, with only one kind of object present in the cells. The objects evolve by means of *spiking rules*, which are of the form *E*∕*a*^{c} → *a*; *d*, where *E* is a regular expression over {*a*} and *c*, *d* are natural numbers, *c* ≥ 1, *d* ≥ 0. The meaning is that a neuron containing *k* spikes such that *a*^{k} ∈ *L*(*E*), *k* ≥ *c*, can consume *c* spikes and produce one spike, after a delay of *d* steps. This spike is sent to all neurons to which a synapse exists outgoing from the neuron where the rule was applied. There are also *forgetting rules*, of the form *a*^{s} → *λ*, with the meaning that *s* ≥ 1 spikes are removed, provided that the neuron contains exactly *s* spikes. The system works in a synchronized manner: in each time unit, each neuron which can use a rule must do so, but the work of the system is sequential in each neuron: (at most) one rule is used in each neuron. One of the neurons is considered to be the *output* one, and its spikes are also sent to the environment. The moments of time when a spike is emitted by the output neuron are marked with 1; the other moments are marked with 0. This binary sequence is called the *spike train* of the system—it might be infinite if the computation does not stop.

The result of a computation is encoded in the distance between the first two spikes sent into the environment by the (output neuron of the) system. Other ways to associate a result with a computation were considered; the spike train itself can be taken as the result of the computation, and in this way the system generates a binary sequence (a finite string, if the computation halts).
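For readers who prefer code, one synchronized step of such a system can be sketched as follows. This is a simplifying sketch: rules have no delay, each neuron deterministically uses the first applicable rule (instead of choosing nondeterministically among applicable ones), and the dictionary-based representation and names are ours:

```python
import re

def sn_step(spikes, rules, synapses):
    """One synchronized step of an SN P system without delays.
    spikes: {neuron: spike count}; rules: {neuron: [(E, c, p)]}, where E is a
    regular expression over "a", c spikes are consumed, and p spikes are sent
    out (p = 0 encodes a forgetting rule a^s -> lambda, with E = "a"*s, c = s);
    synapses: {neuron: [target neurons]}."""
    fired = {}
    for neuron, k in spikes.items():
        for E, c, p in rules.get(neuron, []):
            if k >= c and re.fullmatch(E, "a" * k):  # a^k in L(E) and k >= c
                fired[neuron] = (c, p)  # simplification: first applicable rule
                break
    nxt = dict(spikes)
    for neuron, (c, p) in fired.items():
        nxt[neuron] -= c                 # consume c spikes
        if p > 0:                        # forgetting rules send nothing out
            for t in synapses.get(neuron, []):
                nxt[t] = nxt.get(t, 0) + p
    return nxt
```

For instance, with a single rule *a*^{+}∕*a* → *a* in neuron 1 and a synapse to neuron 2, `sn_step({1: 2, 2: 0}, {1: [("a+", 1, 1)]}, {1: [2]})` consumes one spike in neuron 1 and delivers one to neuron 2.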

(Rules *E*∕*a*^{c} → *a*; *d* with *L*(*E*) = {*a*^{c}} are written in the simplified form *a*^{c} → *a*; *d*.)

The neuron *out* spikes in step 1 by means of the rule *a*^{3} → *a*; 0. All its spikes are consumed. The spike emitted goes immediately to the environment and to neuron 1. The spike goes along the path \(1,2,\ldots,n - 2\) and gets doubled when passing from neuron *n* − 2 to neurons *n* − 1 and 0. Both of these neurons fire. As long as neurons 0 and *n* − 1 spike at different moments (because neuron 0 can use either of its rules, hence also the one with delay), no further spike exits the system (neuron *out* gets only one spike and forgets it immediately), and the spike passes along the cycle of neurons \(1,2,\ldots,n - 1,n\) again and again. If neurons 0 and *n* − 1 spike at the same time (neuron 0 uses the rule *a* → *a*; 0), then the system spikes again—hence at a moment of the form *ni*, *i* ≥ 1. The spike of neuron *out* arrives in neuron 1 at the same time as the spike of neuron *n*, and this halts the computation, because of the rule *a*^{2} → *λ*, which consumes the spikes present in the system. Consequently, the system computes the arithmetical progression {*ni*∣*i* ≥ 1}.

There are several classes of SN P systems using various combinations of ingredients (rules of restricted forms, e.g., without a delay, without forgetting rules, or extended rules, e.g., producing more than one spike), as well as asynchronous SN P systems (no clock is considered; any neuron may use or not use a rule), with exhaustive use of rules (when enabled, a rule is used as many times as made possible by the spikes present in a neuron), with certain further conditions imposed on the halting configuration, etc. For most SN P systems with unbounded neurons (arbitrarily many spikes can be found in each of them), characterizations of Turing computable sets of natural numbers are obtained. When the neurons are bounded, characterizations of the family *NREG* are usually obtained. SN P systems can also be used in the accepting and the computing modes.

A natural research topic is to further investigate the power and the properties of SN dP systems, that is, to combine the idea of distributed P systems introduced in [29] with that of spiking neural P systems. The dP systems consider “systems of P systems,” each one with its own input, in the form of a string, communicating among them by means of antiport rules; the concatenation of the input strings is accepted if the whole system eventually halts. Note that we have here an explicit splitting of the input task, in the form of the strings “read” from the environment by each component, followed by an overall computation, with the cooperation of all components/modules. There are cases when this strategy can speed up the recognition of the input string. SN dP systems were already introduced in [13], but only briefly investigated.

Another research topic is to investigate the possibility of using SN P systems as pattern recognition devices, in general for handling 2D patterns. One of the ideas is to consider a layer of input neurons which can read an array line by line, the array being recognized if and only if the computation halts.

## 7 Numerical P Systems

This class of P systems looks somewhat “exotic” in the framework of membrane computing. It takes only the membrane structure from cell-like P systems, but, instead of multisets of objects, uses numerical variables placed in compartments and evolving according to *production-repartition programs* inspired from economics.

The programs have the form \(F(x_{1,i},\ldots,x_{k,i}) \rightarrow c_{1}|v_{1} + c_{2}|v_{2} + \cdots + c_{n}|v_{n}\), where *F* is a function of *k* variables, \(x_{1,i},\ldots,x_{k,i}\) are (part of the) variables in region *i*, \(c_{1},\ldots,c_{n}\) are natural numbers, and \(v_{1},v_{2},\ldots,v_{n}\) are variables from region *i* and from the parent and the children regions. The idea is that using the function *F*, one computes “the production” of region *i* at a given time, and this production is distributed to the variables \(v_{1},v_{2},\ldots,v_{n}\) proportionally with the coefficients \(c_{1},\ldots,c_{n}\). More formally, let \(C = c_{1} + c_{2} + \cdots + c_{n}\); for each time *t* ≥ 0, we compute \(F(x_{1,i}(t),\ldots,x_{k,i}(t))\). The value \(q = F(x_{1,i}(t),\ldots,x_{k,i}(t))/C\) represents the “unitary portion” to be distributed, according to the repartition expression, to the variables \(v_{1},\ldots,v_{n}\). Thus, *v*_{s} will receive *q* ⋅ *c*_{s}, for each 1 ≤ *s* ≤ *n*.

A production function may use only part of the variables from a region. Those variables “consume” their values when the production function is used (they become zero)—the other variables retain their values. To these values—zero in the case of variables contributing to the region production—one adds all “contributions” received from the neighboring regions.
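One application of a production-repartition program can then be sketched as follows (the representation and names are ours; as in the *div* convention discussed later in this section, the production is assumed divisible by the sum of the coefficients):

```python
def apply_program(values, xs, F, repartition):
    """Apply one production-repartition program (illustrative sketch).
    values: {variable: value}; xs: the variables fed to the production
    function F; repartition: list of (coefficient, target_variable)."""
    production = F(*[values[x] for x in xs])
    C = sum(c for c, _ in repartition)   # sum of repartition coefficients
    q = production // C                  # unitary portion (C assumed to divide)
    for x in xs:
        values[x] = 0                    # contributing variables are consumed
    for c, v in repartition:
        values[v] = values.get(v, 0) + q * c  # v_s receives q * c_s
    return values
```

For example, a program \(2x \rightarrow 1|x + 1|y\) applied to *x* = 4 produces 8; with *C* = 2, the unitary portion is 4, so both *x* and *y* receive 4.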

A *numerical P system* is a construct of the form \(\varPi = (\mu,(\mathit{Var}_{1},\mathit{Pr}_{1},\mathit{Var}_{1}(0)),\ldots,(\mathit{Var}_{m},\mathit{Pr}_{m},\mathit{Var}_{m}(0)),x_{j_{0},i_{0}})\), where *μ* is a membrane structure with *m* membranes labeled injectively by \(1,2,\ldots,m\), *Var*_{i} is the set of variables from region *i*, *Pr*_{i} is the set of programs from region *i* (all sets *Var*_{i}, *Pr*_{i} are finite), *Var*_{i}(0) is the vector of initial values for the variables in region *i*, and \(x_{j_{0},i_{0}}\) is a distinguished variable (from a distinguished region *i*_{0}), which provides the result of a computation.

Such a system evolves in the way informally described before. Initially, the variables have the values specified by *Var*_{i}(0), 1 ≤ *i* ≤ *m*. A transition from a configuration at time *t* to a configuration at time *t* + 1 is made by (1) choosing nondeterministically one program from each region, (2) computing the value of the respective production function for the values of local variables at time *t*, and then (3) computing the values of variables at time *t* + 1 as directed by repartition protocols. A sequence of such transitions forms a computation, with which we associate a set of numbers, namely, the numbers which occur as positive values of the variable \(x_{j_{0},i_{0}}\); this set of numbers is denoted by *N*^{+}(*Π*). If all numbers, positive or negative, are taken into consideration, then we write *N*(*Π*).

As an example, consider a system with three nested membranes (we omit its complete specification), with the result provided by variable *x*_{1, 1}. One can easily see that variable *x*_{1, 3} increases by 1 at each step, also transmitting its value to *x*_{1, 2}. In turn, region 2 transmits the value 2*x*_{1, 2} + 1 to *x*_{1, 1}, which is never consumed; hence, its value increases continuously. In the initial configuration all variables are set to 0. Thus, *x*_{1, 1} starts from 0 and continuously receives 2*i* + 1, for \(i = 0,1,2,3,\ldots\), which implies that in *n* ≥ 1 steps the value of *x*_{1, 1} becomes \(\sum _{i=0}^{n-1}(2i + 1) = n^{2}\), and consequently \(N(\varPi ) =\{ n^{2}\mid n \geq 0\}\).
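The dynamics of this example can be reproduced by a short simulation (a sketch of the behavior described above, with simultaneous updates of the three variables; the exact production-repartition programs of the system are not spelled out here, so the function name and representation are ours):

```python
def x11_values(steps: int) -> list:
    """Simulate the described dynamics: x13 increases by 1 each step and
    transmits its value to x12; region 2 sends 2*x12 + 1 to x11, which is
    never consumed. All variables start from 0; updates are simultaneous."""
    x11 = x12 = x13 = 0
    values = []
    for _ in range(steps):
        x11, x12, x13 = x11 + 2 * x12 + 1, x13 + 1, x13 + 1
        values.append(x11)
    return values

# x11_values(5) yields [1, 4, 9, 16, 25], i.e., x11 = n*n after n steps
```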

We denote by \(\mathit{NN}^{+}P_{m}(\mathit{poly}^{n}(r),\mathit{div})\) the family of sets *N*^{+}(*Π*) generated by numerical P systems with *m* ≥ 1 membranes, using polynomials of degree *n* ≥ 0 with at most *r* ≥ 0 variables as production functions; *div* indicates the fact that we consider only systems whose programs have the property that the production at any moment, \(F(x_{1,i}(t),\ldots,x_{k,i}(t))\), is divisible by the sum of repartition coefficients, *C*. Variants can be considered, e.g., with the remainder being lost, or carried to the next production in that region. If the system is deterministic, we add D in front of the notation. Any parameter which is not bounded is replaced with ∗.

Somewhat expected, numerical P systems are computationally complete [28]:

### Theorem 2

\(\mathit{NRE} = \mathit{NN}^{+}P_{8}(\mathit{poly}^{5}(5),\mathit{div}) = \mathit{NN}^{+}P_{7}(\mathit{poly}^{5}(6),\mathit{div})\).

These results were improved by using *enzymatic* programs: programs which additionally specify an enzyme variable *e*_{j, i}, a variable from *Var*_{i} different from \(x_{1,i},\ldots,x_{k,i}\) and from \(v_{1},\ldots,v_{n}\). Such a program is applicable at a time *t* only if \(e_{j,i}(t) >\min (x_{1,i}(t),\ldots,x_{k,i}(t))\).

Using enzymes helps (in particular, the selection of positive values of the output variable can be done inside the system, not imposed as an external condition):

### Theorem 3

\(\mathit{NRE} = \mathit{NNP}_{7}(\mathit{poly}^{5}(5),\mathit{enz})\).

It is also important to define the transitions in a parallel way, using several programs at a time. Two possibilities were considered: (1) all programs in a region which can be applied (under enzyme control) are applied, but each variable is used by only one program (this is called the *oneP* mode), and (2) as above, but with each variable allowed to appear in as many programs as necessary (denoted *allP*).

The following results were obtained in [16]:

### Theorem 4

\(\mathit{NRE} = \mathit{NNP}_{1}(\mathit{poly}^{1}(1),\mathit{enz},\mathit{allP}) = \mathit{NNP}_{1}(\mathit{poly}^{1}(1),\mathit{enz},\mathit{oneP})\).

Natural problems appear in this context: what about nonenzymatic systems? Can the parameters in Theorem 2 be improved for the sequential case?

A language *L* ⊆ { 0, 1}^{∗} is decided by a numerical P system *Π* in polynomial time if *Π* contains two distinguished variables *acc* and *rej* and, after introducing the number 1*x* (for any \(x \in \{ 0,1\}^{{\ast}}\)) in a specified variable, *Π* halts in *O*( | *x* | ^{k}) time, for some constant *k*, and:

If \(x \in L\), then \(\mathit{acc} = 1,\mathit{rej} = 0\).

If \(x\notin L\), then \(\mathit{acc} = 0,\mathit{rej} = 1\).

We denote by **P-ENP**\((X),X \subseteq \{ +,-,\times,\div \}\), the corresponding complexity class, when using enzymatic numerical P systems working in the *allP* mode; the set *X* indicates the operations used in the production functions. The following characterizations of **P** and **PSPACE** were obtained in [17]:

### Theorem 5

(i) **P-ENP**\((+,-) = \mathbf{P}\); (ii) **P-ENP**\((+,-,\times,\div ) = \mathbf{PSPACE}\).

Also in this case there are several questions which remain to be investigated: what about sequential systems, about systems working in the *oneP* mode, about nonenzymatic systems?

All these results (universality and efficiency—when using all four arithmetical operations) look very promising from the application point of view, while the robot control case is also encouraging. Numerical P systems deserve further research efforts.

## 8 Closing Remarks

This chapter has only presented some basic facts (notions and results) of membrane computing, at a rather informal level, always pointing also to research topics which wait to be investigated. Technical details and comprehensive lists of open problems can be found in the domain literature, in particular through the membrane computing website at [36]. From the point of view of applications, we find the computational complexity results particularly interesting: P systems of various types can solve hard problems in a feasible time (due to the inherent massive parallelism, distribution, and possibility of creating an exponential workspace in a linear time, by means of operations directly inspired from biology). It is important, however, to note that at this time there is no laboratory implementation of a P system. On the other hand, there are many software products that help simulate P systems on usual computers, on grids and networks, on parallel hardware (such as GPUs, Graphics Processing Units), and on other electronic supports. Based on such software and implementations, significant applications were reported, especially in biology, biomedicine, ecology, and approximate optimization. The domain is evolving fast, so the reader interested in this research area is advised to watch progress through the mentioned website or to keep in touch with the membrane computing community.

### References

- 1. C.S. Calude, Gh. Păun, G. Rozenberg, A. Salomaa (eds.), *Multiset Processing. Mathematical, Computer Science, and Molecular Computing Points of View*. LNCS, vol. 2235 (Springer, Berlin, 2001). The first international meeting devoted to membrane computing was organized already in the summer of 2000, in Curtea de Argeş, Romania, and it was concerned both with the developments in the emerging research area of membrane computing and with the mathematical and computer science investigations of multisets. This LNCS volume is the proceedings of the workshop, edited after the meeting.
- 2. G. Ciobanu, Gh. Păun, M.J. Pérez-Jiménez (eds.), *Applications of Membrane Computing* (Springer, Berlin, 2006). The volume presents several classes of applications (in biology and biomedicine, computer science, linguistics), as well as the software available at the time of editing the book, and a selective bibliography of membrane computing. Here are the sections of the chapter *Computer Science Applications*: Static sorting P systems; Membrane-based devices used in computer graphics; An analysis of a public key protocol with membranes; Membrane algorithms: approximate algorithms for NP-complete optimization problems; and Computationally hard problems addressed through P systems.
- 3. P. Frisco, M. Gheorghe, M.J. Pérez-Jiménez (eds.), *Applications of Membrane Computing in Systems and Synthetic Biology* (Springer, Berlin, 2014). Different from volume [2], this time only applications in biology and biomedicine are concerned, at the level of year 2013, with a detailed biological motivation, in most cases reporting research done in interdisciplinary teams, including both biologists and computer scientists.
- 4. R. Freund, Particular results for variants of P systems with one catalyst in one membrane, in *Proc. Fourth Brainstorming Week on Membrane Computing*, vol. II (Fénix Editora, Sevilla, 2006), pp. 41–50
- 5. R. Freund, Purely catalytic P systems: Two catalysts can be sufficient for computational completeness, in *Proc. 14th Intern. Conf. on Membrane Computing* (Chişinău, Moldova, 2013), pp. 153–166
- 6. R. Freund, O.H. Ibarra, A. Păun, P. Sosík, H.-C. Yen, Catalytic P systems. Chapter 4 of [30]
- 7. R. Freund, L. Kari, M. Oswald, P. Sosík, Computationally universal P systems without priorities: two catalysts are sufficient. Theor. Comput. Sci. **330**, 251–266 (2005). After a series of previous papers (the first one, by P. Sosík, where the universality of catalytic P systems was proved for systems with 8, then 6, catalysts), this paper established the best result in this respect: two catalysts suffice.
- 8. R. Freund, Gh. Păun, Universal P systems: One catalyst can be sufficient, in *Proc. 11th Brainstorming Week on Membrane Computing* (Fénix Editora, Sevilla, 2013), pp. 81–96
- 9. M. Gheorghe, Gh. Păun, M.J. Pérez-Jiménez, G. Rozenberg, Frontiers of membrane computing: Open problems and research topics. Int. J. Found. Comput. Sci. **24**(5), 547–623 (2013) (first version in *Proc. Tenth Brainstorming Week on Membrane Computing*, vol. I (Sevilla, January 30–February 3, 2012), pp. 171–249). This paper circulated in the membrane computing community in the brainstorming version under the title of “mega-paper.” In this form, it contains 26 sections, written by separate authors, covering most of the branches of this research area and presenting open problems and research topics of current interest. The titles of these 26 sections are worth recalling: A glimpse to membrane computing; Some general issues; The power of small numbers; Polymorphic P systems; P colonies and dP automata; Spiking neural P systems; Control words associated with P systems; Speeding up P automata; Space complexity and the power of elementary membrane division; The P-conjecture and hierarchies; Seeking sharper frontiers of efficiency in tissue P systems; Time-free solutions to hard computational problems; Fypercomputations; Numerical P systems; P systems formal verification and testing; Causality, semantics, behavior; Kernel P systems; Bridging P and R; P systems and evolutionary computing interactions; Metabolic P systems; Unraveling oscillating structures by means of P systems; Simulating cells using P systems; P systems for computational systems and synthetic biology; Biologically plausible applications of spiking neural P systems for an explanation of brain cognitive functions; Computer vision; and Open problems on simulation of membrane computing models.
- 10. O.H. Ibarra, Z. Dang, O. Egecioglu, Catalytic P systems, semilinear sets, and vector addition systems. Theor. Comput. Sci. **312**, 379–399 (2004)
- 11. O.H. Ibarra, Z. Dang, O. Egecioglu, G. Saxena, Characterizations of catalytic membrane computing systems, in *28th Intern. Symp. Math. Found. Computer Sci.*, ed. by B. Rovan, P. Vojtás. LNCS, vol. 2747 (Springer, 2003), pp. 480–489
- 12. M. Ionescu, Gh. Păun, T. Yokomori, Spiking neural P systems. Fundamenta Informaticae **71**(2–3), 279–308 (2006). This is the paper where the spiking neural P systems were introduced, and two basic results of this area were proved: universality in the general case, and semilinearity of the computed sets of numbers in the bounded case. Similar results were later obtained for many classes of SN P systems.
- 13. M. Ionescu, Gh. Păun, M.J. Pérez-Jiménez, T. Yokomori, Spiking neural dP systems. Fundamenta Informaticae **11**(4), 423–436 (2011)
- 14. S.N. Krishna, A. Păun, Results on catalytic and evolution-communication P systems. New Generat. Comput. **22**, 377–394 (2004)
- 15. K. Krithivasan, Gh. Păun, A. Ramanujan, On controlled P systems. Fundamenta Informaticae **131**(3–4), 451–464 (2014)
- 16. A. Leporati, A.E. Porreca, C. Zandron, G. Mauri, Improving universality results on parallel enzymatic numerical P systems, in *Proc. 11th Brainstorming Week on Membrane Computing* (Fénix Editora, Sevilla, 2013), pp. 177–200
- 17. A. Leporati, A.E. Porreca, C. Zandron, G. Mauri, Enzymatic numerical P systems using elementary arithmetic operations, in *Proc. 14th Intern. Conf. on Membrane Computing* (Chişinău, Moldova, 2013), pp. 225–240
- 18. W. Maass, C. Bishop (eds.), *Pulsed Neural Networks* (MIT Press, Cambridge, 1999)
- 19. M. Mutyam, K. Krithivasan, P systems with membrane creation: Universality and efficiency, in *Proc. MCU 2001*, ed. by M. Margenstern, Y. Rogozhin. LNCS, vol. 2055 (Springer, Berlin, 2001), pp. 276–287
- 20. A.B. Pavel, C.I. Vasile, I. Dumitrache, Robot localization implemented with enzymatic numerical P systems, in *Proc. Conf. Living Machines 2012*. LNCS, vol. 7375 (Springer, 2012), pp. 204–215
- 21. A. Păun, Gh. Păun, The power of communication: P systems with symport/antiport. New Generat. Comput. **20**, 295–305 (2002). The symport/antiport P systems were introduced here, and their universality was proved for rules of various complexities/sizes. These results were improved in a large number of papers, until reaching universality for minimal symport and antiport rules.
- 22. Gh. Păun, Computing with membranes. J. Comput. Syst. Sci. **61**(1), 108–143 (2000) (and Turku Center for Computer Science-TUCS Report 208, November 1998, www.tucs.fi). This is the paper where membrane computing was initiated. The cell-like P systems are introduced, both with symbol objects and string objects, and for both cases the universality was proved (using the characterization of Turing computable sets of numbers as the length sets of languages generated by context-free matrix grammars with appearance checking; later, more direct and simple proofs were obtained, starting from register machines). In the case of strings, both rewriting and splicing rules were investigated.
- 23. Gh. Păun, Computing with membranes—A variant. Int. J. Found. Comput. Sci. **11**(1), 167–182 (2000)
- 24. Gh. Păun, P systems with active membranes: attacking NP-complete problems. J. Autom. Lang. Combinat. **6**, 75–90 (2001). Membrane division was introduced here, in the general framework of P systems with active membranes (the membranes are explicit parts of the object evolution rules), and a polynomial semi-uniform solution to SAT is provided. Later, uniform solutions were obtained (also for other **NP**-complete problems).
- 25. Gh. Păun, *Membrane Computing. An Introduction* (Springer, Berlin, 2002). This is the first survey of membrane computing, systematizing the notions and the results at only a few years after the initiation of this research area. After an informal introduction (“Membrane computing—what it is and what it is not”) and a chapter providing the biological and the computability prerequisites for the rest of the book, one presents the cell-like P systems with symbol objects and multiset rewriting rules, the systems with symport/antiport rules, the P systems with string objects, and then the tissue-like P systems; their computing power is investigated; then one passes to the computing efficiency (“Trading space for time”), considering P systems with membrane division, membrane creation, string replication, and precomputed resources. Two more chapters present “further technical results” and “(attempts to get) back to reality.” The book ends with a list of open problems and of universality results.
- 26. Gh. Păun, Towards “fypercomputations” (in membrane computing), in *Languages Alive. Essays Dedicated to Jürgen Dassow on the Occasion of His 65th Birthday*, ed. by H. Bordihn, M. Kutrib, B. Truthe. LNCS, vol. 7300 (Springer, Berlin, 2012), pp. 207–221. The term “fypercomputation” (coming from “fast computation” and reminding of “hypercomputation” = a computation going beyond the “Turing barrier”) was coined to name situations when a computing device can solve **NP**-complete problems in polynomial time, hence when a significant efficiency speedup is obtained.
- 27. Gh. Păun, Some open problems about catalytic, numerical and spiking neural P systems, in *Proc. 14th Intern. Conf. on Membrane Computing* (Chişinău, Moldova, 2013), pp. 25–34
- 28. Gh. Păun, R. Păun, Membrane computing and economics: Numerical P systems. Fundamenta Informaticae **73**, 213–227 (2006)
- 29. Gh. Păun, M.J. Pérez-Jiménez, Solving problems in a distributed way in membrane computing: dP systems. Int. J. Comput. Commun. Cont. **5**(2), 238–252 (2010)
- 30. Gh. Păun, G. Rozenberg, A. Salomaa (eds.), *Handbook of Membrane Computing* (Oxford University Press, 2010). The basics of membrane computing are given in the book [25] (translated in Chinese in 2013), but the domain has fast evolved beyond the contents of the volume; new classes of P systems were introduced; new results and applications were reported. This made both necessary and possible the editing of the present handbook, a comprehensive survey of membrane computing at the level of 2009. Its contents are a suggestive hint to the landscape of membrane computing: 1. An introduction to and an overview of membrane computing (Gh. Păun, G. Rozenberg); 2. Cell biology for membrane computing (D. Besozzi, I.I. Ardelean); 3. Computability elements for membrane computing (Gh. Păun, G. Rozenberg, A. Salomaa); 4. Catalytic P systems (R. Freund, O.H. Ibarra, A. Păun, P. Sosík, H.-C. Yen); 5. Communication P systems (R. Freund, A. Alhazov, Y. Rogozhin, S. Verlan); 6. P automata (E. Csuhaj-Varjú, M. Oswald, G. Vaszil); 7. P systems with string objects (C. Ferretti, G. Mauri, C. Zandron); 8. Splicing P systems (S. Verlan, P. Frisco); 9. Tissue and population P systems (F. Bernardini, M. Gheorghe); 10. Conformon P systems (P. Frisco); 11. Active membranes (Gh. Păun); 12. Complexity – Membrane division, membrane creation (M.J. Pérez-Jiménez, A. Riscos-Núñez, Á. Romero-Jiménez, D. Woods); 13. Spiking neural P systems (O.H. Ibarra, A. Leporati, A. Păun, S. Woodworth); 14. P systems with objects on membranes (M. Cavaliere, S.N. Krishna, A. Păun, Gh. Păun); 15. Petri nets and membrane computing (J. Kleijn, M. Koutny); 16. Semantics of P systems (G. Ciobanu); 17. Software for P systems (D. Díaz-Pernil, C. Graciani, M.A. Gutiérrez-Naranjo, I. Pérez-Hurtado, M.J. Pérez-Jiménez); 18. Probabilistic/stochastic models (P. Cazzaniga, M. Gheorghe, N. Krasnogor, G. Mauri, D. Pescini, F.J. Romero-Campero); 19. Fundamentals of metabolic P systems (V. Manca); 20. Metabolic P dynamics (V. Manca); 21. Membrane algorithms (T.Y. Nishida, T. Shiotani, Y. Takahashi); 22. Membrane computing and computer science (R. Ceterchi, D. Sburlan); 23. Other developments; 23.1. P Colonies (A. Kelemenová); 23.2. Time in membrane computing (M. Cavaliere, D. Sburlan); 23.3. Membrane computing and self-assembly (M. Gheorghe, N. Krasnogor); 23.4. Membrane computing and X-machines (P. Kefalas, I. Stamatopoulou, M. Gheorghe, G. Eleftherakis); 23.5. Q-UREM P systems (A. Leporati); 23.6. Membrane computing and economics (Gh. Păun, R.A. Păun); 23.7. Mobile membranes and mobile ambients (B. Aman, G. Ciobanu); 23.8. Other topics (Gh. Păun, G. Rozenberg)
- 31. Gh. Păun, S. Yu, On synchronization in P systems. Fundamenta Informaticae **38**(4), 397–410 (1999)
- 32. M.J. Pérez-Jiménez, A. Riscos-Núñez, A. Romero-Jiménez, Complexity—Membrane division and membrane creation. Chapter 12 of [30]
- 33. P. Sosík, A catalytic P system with two catalysts generating a non-semilinear set. *Romanian J. Inf. Sci. Technology* **16**(1), 3–9 (2013)
- 34. C.I. Vasile, A.B. Pavel, J. Kelemen, Implementing obstacle avoidance and follower behaviors on Koala robots using numerical P systems, in *Tenth Brainstorming Week on Membrane Computing*, vol. II (Sevilla, 2012), pp. 215–227
- 35. C. Zandron, C. Ferretti, G. Mauri, Solving NP-complete problems using P systems with active membranes, in *Proc. Unconventional Models of Computation*, ed. by I. Antoniou et al. (Springer, 2000), pp. 289–301. Among other results, one proves here the so-called “Milano theorem,” saying that P systems without membrane division cannot solve **NP**-complete problems in polynomial time (unless **P** = **NP**).
- 36. The P Systems Website, www.ppage.psystems.eu