Encyclopedia of Complexity and Systems Science

Living Edition
| Editors: Robert A. Meyers

Membrane Computing, Power and Complexity

  • Marian Gheorghe
  • Andrei Păun
  • Sergey Verlan
  • Gexiang Zhang
Living reference work entry
DOI: https://doi.org/10.1007/978-3-642-27737-5_697-1




Glossary

Computational completeness

A computing model which is equivalent in power to Turing machines (the standard model of algorithmic computing) is said to be computationally complete or Turing complete. In the case of an accepting (resp. generating) model like Turing machines (resp. formal grammars), it is also said that such a model can accept (resp. generate) all recursively enumerable languages.

Membrane structure

In biology, the cells are separated from the environment by a membrane; the internal compartments of a cell (nucleus, mitochondria, Golgi apparatus, vesicles, etc.) are also delimited by membranes. A membrane is a separator of a compartment, and it also has filtering properties (only certain substances, in certain conditions, can pass through a membrane). In a cell, the membranes are hierarchically arranged, and they delimit “protected reactors,” compartments where specific chemicals evolve according to specific reactions. In membrane computing there is a notion of space which consists of such cell-like arrangement of membranes and that is called membrane structure. From a theoretical point of view, this corresponds to a topological space (a partition of the space where the actual size and geometrical form of membranes are not important, and only the “include” or “neighbor” relationships between them are considered). Moreover, a cell-like (hence hierarchical) membrane structure corresponds to a tree; hence, a natural representation is by means of a tree or any mathematical representation of a tree.


Multiset

A multiset is a set with multiplicities associated with its elements. For instance, {(a, 2), (b, 3)} is the multiset consisting of two copies of element a and three copies of element b. Mathematically, a multiset is identified with a mapping μ from a support set U (an alphabet) to ℕ, the set of natural numbers, μ: U → ℕ. A multiset can be compactly represented by a string w ∈ U*, where the number of occurrences of a symbol in w is the multiplicity of that element in the multiset represented by w (a^2b^3 or ab^2ab for the example above).
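In programming terms, a multiset over an alphabet is just a mapping from symbols to counts; a minimal Python sketch (an illustration, not part of the original entry) uses the standard library's `Counter`:

```python
from collections import Counter

# The multiset {(a, 2), (b, 3)}: two copies of a, three copies of b.
m = Counter({"a": 2, "b": 3})

# Two different string representations denote the same multiset,
# since a multiset is a string modulo permutation of its symbols.
assert Counter("aabbb") == m
assert Counter("abbab") == m

# The multiplicity mapping mu: U -> N (absent symbols have multiplicity 0).
assert m["a"] == 2 and m["b"] == 3 and m["c"] == 0
```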

Multiset rewriting

A multiset of objects M evolves by means of (multiset) rewriting rules, of the form u → v, where u and v are multisets. Such a rule is applicable if M contains u. The result of its application is the multiset M − u + v, i.e., u is removed from M, and then v is added to the result.
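The rule semantics M − u + v can be sketched directly with `Counter` arithmetic; `applicable` and `apply_rule` below are illustrative names, not from the original text:

```python
from collections import Counter

def applicable(M, u):
    """A rule u -> v is applicable to M iff M contains u."""
    return all(M[s] >= n for s, n in u.items())

def apply_rule(M, u, v):
    """Return M - u + v, the result of applying the rule u -> v."""
    if not applicable(M, u):
        raise ValueError("rule not applicable")
    result = M - u      # remove u from M ...
    result.update(v)    # ... then add v
    return result

M = Counter("aabbb")
assert apply_rule(M, Counter("ab"), Counter("c")) == Counter({"a": 1, "b": 2, "c": 1})
```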

Register machine

A register machine is a computing device working with a finite number of registers that can hold natural numbers. It has a program composed of three types of instructions: add one to the value of some specified register, subtract one from the value of some register, and the conditional jump that performs a zero test on some register and, based on the result, transfers control to another instruction of the program.

Register machines are known to be universal with only two registers. Moreover, it is possible to construct universal machines having a small number of instructions.
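Such a machine is easy to interpret in a few lines; the Python sketch below uses a hypothetical tuple encoding of the instructions (here the decrement instruction carries the zero test, a common formulation):

```python
def run(program, registers):
    """Interpret a register machine program given as a list of tuples:
    ("inc", r, j): add 1 to register r, continue at instruction j;
    ("dec", r, j, k): if register r > 0, subtract 1 and go to j,
                      otherwise (zero test) go to k;
    ("halt",): stop."""
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc = op[2]
        elif registers[op[1]] > 0:
            registers[op[1]] -= 1
            pc = op[2]
        else:
            pc = op[3]
    return registers

# Move the contents of register 0 into register 1.
assert run([("dec", 0, 1, 2), ("inc", 1, 0), ("halt",)],
           {0: 5, 1: 0}) == {0: 0, 1: 5}
```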

Regulated rewriting

Regulated rewriting is the area of formal language theory that studies (context-free) grammars enriched with control mechanisms that restrict the possible rule applications at each step.

There are two types of restrictions: context-based conditions and control-based conditions. The first type of conditions specifies the properties of the word that need to be satisfied in order for a rule to be applicable to it. Typical examples are permitting (resp. forbidding) conditions that list symbols that should (resp. should not) be present in the word in order for a rule to be applicable to it.

The control-based conditions specify the properties of the control language (the language consisting of all sequences of applied rules corresponding to a successful computation) that need to be satisfied in order to apply a rule. Typical examples are graph control (all possible sequences of rules are described by a finite automaton) and matrix control (graph control where the automaton is a finite union of loops starting and ending in the initial state).
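As a small illustration of the context-based conditions, the Python sketch below (illustrative, not from the original text) checks whether a rule with permitting and forbidding symbols is applicable to a word:

```python
def rule_applicable(word, lhs, permitting=(), forbidding=()):
    """A (context-free) rule lhs -> rhs is applicable to the word iff
    lhs occurs in it, every permitting symbol is present,
    and no forbidding symbol is present."""
    return (lhs in word
            and all(s in word for s in permitting)
            and not any(s in word for s in forbidding))

# A -> x may be applied only in the presence of B and in the absence of C.
assert rule_applicable("aAbB", "A", permitting=("B",), forbidding=("C",))
assert not rule_applicable("aAbC", "A", permitting=("B",), forbidding=("C",))
```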


Symport/antiport

An important way of selectively passing chemicals across biological membranes is the coupled transport through protein channels. The process of moving two or more objects together in the same direction is called symport; the process of simultaneously moving two or more objects across a membrane, in opposite directions, is called antiport. For uniformity, the particular case of moving only one object is called uniport.
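Using the rule notation (u, in; v, out) that appears later in this entry, an antiport step can be sketched in Python (the function name and conventions are illustrative):

```python
from collections import Counter

def antiport(inside, outside, u, v):
    """Apply the antiport rule (u, in; v, out) to a membrane:
    multiset u enters from outside while v simultaneously exits;
    the rule is applicable only if u is available outside and v inside."""
    if (all(outside[s] >= n for s, n in u.items())
            and all(inside[s] >= n for s, n in v.items())):
        return inside - v + u, outside - u + v
    return inside, outside  # not applicable: nothing moves

inside, outside = antiport(Counter("ab"), Counter("cc"),
                           u=Counter("c"), v=Counter("a"))
assert inside == Counter("bc") and outside == Counter("ac")
```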


Universality

Universality is a property of a class of computing (programmable) devices stating that there exists a concrete device that can “simulate” the computation of any device in the corresponding class by taking as additional input the “code” of the device to be simulated. Such a “simulation” generally requires the input to be encoded and the output to be decoded using some recursive functions. The typical example of universality is the class of Turing machines, proved to be universal in Turing (1937). As a consequence, any computationally complete model is universal.


Membrane computing is a branch of natural computing initiated in Păun (2000) which abstracts computing models from the architecture and the functioning of living cells, as well as from the organization of cells in tissues, organs (the brain included), or other higher-order structures. The initial goal of membrane computing was to learn from cell biology something possibly useful to computer science, and the area quickly developed in this direction. Several classes of computing models called P systems (or membrane systems) were defined in this context, inspired by biological facts or motivated from mathematical or computer science points of view. A series of applications have been reported in recent years; we refer to Ciobanu et al. (2006a) and to Zhang et al. (2017) for a comprehensive overview.

The main ingredients of a P system are (i) the membrane structure, (ii) the multisets of objects placed in the compartments of the membrane structure, and (iii) the rules for processing the objects and the membranes. Thus, membrane computing can be defined as a framework for devising computing models, which process multisets in compartments defined by means of membranes that are arranged in a topological space, usually in a cell-like or tissue-like manner. These models are (in general) distributed and parallel.

Since in many cases the information processing in P systems can be seen as distributed multiset rewriting, there are strong connections between P systems and other multiset rewriting-based models like Petri nets or vector addition systems; generally, P systems give a different point of view on the corresponding problems. There are two main particularities of using a P system-based approach for the investigation of multiset rewriting: (a) the explicit notion of (topological) space and of the location of rules and objects in this space and (b) the integration of notions from the regulated rewriting area, allowing many powerful features. The first point is extremely important: in many problems there is a notion of space and of co-location of symbols and rules that is made explicit by a P system, but that would have to be encoded and deduced when using other multiset-based models. The second point makes the P systems area the one where the principles of functioning of multiset-based models are studied in the most advanced way.

When a P system is considered as a computing device, and hence investigated in terms of (theoretical) computer science, the main issues concern the computing power (in comparison with standard models from computability theory, especially Turing machines and their restrictions) and the computing efficiency (the possibility of using the parallelism for solving computationally hard problems in a feasible time). Computationally and mathematically oriented ways of using the rules and of defining the result of a computation are considered in this case (e.g., maximal or minimal parallelism, halting, counting objects). When a P system is constructed as a model of a biochemical process, it is examined in terms of dynamical systems, with the evolution in time being the issue of interest, not a specific output.

From a theoretical point of view, P systems are both powerful (most classes are Turing complete, even when using ingredients of a reduced complexity – a small number of membranes, rules of simple forms, ways of controlling the use of rules directly inspired from biology) and efficient (many classes of P systems, especially those with an enhanced parallelism, can solve computationally hard problems, typically NP-complete problems, but also harder problems, in a feasible time, typically polynomial). Then, as a modeling framework, membrane computing is rather adequate for handling discrete (biological) processes, having many attractive features: easy understandability, scalability and programmability, inherent compartmentalization and nonlinearity, etc. Ideas from cell biology as captured by membrane computing proved to be rather useful in handling various computer science topics – computer graphics, robot control, membrane evolutionary algorithms, used for solving optimization problems, etc.

The literature of membrane computing has grown very fast (already in 2003, the Thomson Institute for Scientific Information (ISI) qualified the initial paper as “fast breaking” and the domain as an “emergent research front in computer science” – see http://esi-topics.com ), while the bibliography of the field counted, as of mid-2016, more than 3000 titles (see the Web site at http://ppage.psystems.eu ). Moreover, the domain is now very diverse, as a consequence of the many motivations for introducing new variants of P systems: to be biologically oriented/realistic, mathematically elegant, computationally powerful, and efficient. That is why it is possible to give here only a few basic notions and only a few (types of) results and applications. The reader interested in details should consult the monograph Păun (2002), the volume Ciobanu et al. (2006b), where a friendly introduction to membrane computing can be found in the first chapter, the Handbook Păun et al. (2010), the book Zhang et al. (2017), and the comprehensive bibliography on the abovementioned Web page.

Types of P Systems

The field started by looking to the cell in order to learn something possibly useful to computer science, but then the research also considered cell organization in tissues (in general, populations of cells, such as colonies of bacteria) and, recently, also neuron organization in the brain. Thus, at the moment, there are four main types of P systems: (i) cell-like P systems, (ii) tissue-like P systems, (iii) neural-like P systems, and (iv) numerical P systems.

The cell-like P systems imitate the (eukaryotic) cell. Their basic ingredient is the membrane structure, a hierarchical arrangement of membranes (understood as three-dimensional vesicles), delimiting compartments where multisets of symbol objects are placed; rules for evolving these multisets as well as the membranes are provided and also localized, acting in specified compartments or on specified membranes. The objects not only evolve, but they also pass through membranes (we say that they are “communicated” among compartments). The rules can have several forms, and their use can be controlled in various ways: promoters, inhibitors, priorities, etc.

In tissue-like P systems, several one-membrane cells are considered as evolving in a common environment. They contain multisets of objects, while also the environment contains objects. Certain cells can communicate directly (channels are provided between them), and all cells can communicate through the environment. The channels can be given in advance or they can be dynamically established – this latter case appears in so-called population P systems.

Numerical P systems are based on a cell-like membrane structure, but in their compartments numerical variables evolve, rather than (biochemical) objects as in a cell-like P system. These variables evolve by means of programs, composed of a production function and a repartition protocol.

There are two types of neural-like P systems. One of them is similar to tissue-like P systems in that the cells (neurons) are placed in the nodes of an arbitrary graph and contain multisets of objects, but they also have a state which controls the evolution. The other variant, called spiking neural P systems, uses only one type of object, the spike; the main information one works with is the time distance between consecutive spikes.

The cell-like P systems were introduced first, and their theory is now very well developed; tissue-like P systems have also attracted considerable interest, while the neural-like systems, mainly in the form of spiking neural P systems, have been investigated only more recently.

Another important distinction between variants of P systems is based on whether the membrane structure can evolve in time (add/delete/move membranes, change links, etc.). If the structure can be changed, the corresponding models are called P systems with dynamically evolving structure or P systems with active membranes (in the case of cell-like systems). If the structure cannot evolve, the corresponding systems are called static P systems. It is worth noting that deletion and bounded (limited in number) creation of membranes yield a finite set of possible membrane structures, so the corresponding models are variants of static P systems.

Any static P system using multiset rewriting rules operating on objects in membranes can be flattened, i.e., reduced to one membrane (Freund et al. 2013). This makes a strong link between P systems and multiset rewriting, as well as other multiset-based models like Petri nets, vector addition systems, and register machines. Basically, this means that any result for any of the models cited above can easily be transcribed in terms of another model. However, the obtained constructions could use some nonstandard or nonmainstream features (e.g., max-step semantics in Petri nets).

While flattening is a useful tool for the comparison between different models of P systems (as well as for linking them to related models), it makes the analysis of the system more difficult, as it hides the localization of objects and rules (so, the possible co-locations). Hence, in general, the description/definition of P systems places particular emphasis on the underlying structure of the system, with rules and objects spatially located in the system.

Many classes of P systems can be obtained by considering various possibilities for the various ingredients. We enumerate here several of these possibilities, without exhausting the list:
  • Objects: symbols, strings of symbols, spikes, arrays, trees, numerical variables, other data structures, combinations

  • Data structure: multisets, sets (languages in the case of strings), fuzzy sets, fuzzy multisets, (algebraic) groups

  • Place of objects: in compartments, on membranes, combined

  • Forms of rules: multiset rewriting, symport/antiport, communication rules, boundary rules, with active membranes, combined, string rewriting, array/trees processing, spike processing

  • Controls on rules: catalysts, priority, promoters, inhibitors, activators, sequencing, energy

  • Form of membrane structure: cell-like (tree), tissue-like (arbitrary graph)

  • Type of membrane structure: static, dynamic, precomputed (arbitrarily large)

  • Timing: synchronized, non-synchronized, local synchronization, time-free

  • Ways of using the rules: maximal parallelism, minimal parallelism, bounded parallelism, sequential mode

  • Successful computations: global halting, local halting, with specified events signaling the end of a computation, non-halting

  • Modes of using a system: generative, accepting, computing an input-output function, deciding

  • Types of evolution: deterministic, nondeterministic, confluent, probabilistic

  • Ways to define the output: internal, external, traces, tree of membrane structure, spike train

  • Types of results: set of numbers, set of vectors of numbers, languages, set of arrays, yes/no

We refer to the literature for details, and we only add here the fact that when using P systems as models of biological systems/processes, we have to apply the rules in ways suggested by biochemistry, according to reaction rates or probabilities; in many cases, these rates are computed dynamically, depending on the current population of objects in the system.

General Functioning of P Systems

In short, a P system consists of a (hierarchical) arrangement of membranes, which delimit compartments, where multisets (sets with multiplicities associated with their elements) of abstract objects are placed. These objects correspond to the chemicals from the compartments of a cell; the chemicals swim in water (many of them are bound on membranes, but we do not consider this case here), and their multiplicity matters; that is why the data structure most adequate to this situation is the multiset (a multiset can be seen as a string modulo permutation, which is why in membrane computing one usually represents multisets by strings). In what follows, the objects are supposed to be unstructured; hence, we represent them by symbols from a given alphabet.

The objects evolve according to rules which are also associated with the regions. The rules say both how the objects are changed and how they can be moved (communicated) across membranes. In many cases the rules can be seen as particular cases of multiset rewriting rules enriched with communication. A particular interest in the area is to restrict the possible types of rules, especially following some biological motivation. As examples we cite catalytic rules (of the form ca → cu) and symport/antiport rules (of the form (u, in; v, out)), motivated by the corresponding biological phenomena.

There also are rules which only move objects across membranes, as well as rules for evolving the membranes themselves (e.g., by destroying, creating, dividing, or merging membranes). By using these rules, we can change the configuration of a system (the multisets from their compartments as well as the membrane structure); we say that we get a transition among system configurations.

The rules can be applied in many ways. The basic mode imitates the biological way chemical reactions are performed – in parallel – with the additional mathematical restriction of maximal parallelism: one applies a multiset of rules which is maximal; no further object can evolve at the same time by any rule. There might be several such maximal multisets of rules; so, in the general case, the evolution is performed in a nondeterministic manner.

Besides the maximally parallel mode, several others were considered: sequential (one rule is used in each step), bounded parallelism (the number of membranes to evolve and/or the number of rules to be used in any step is bounded in advance), set-maximal (or flat) parallelism (at most one rule of each kind can be used in a maximally parallel manner), minimal parallelism (in each compartment where a rule can be used, at least one rule must be used), etc.
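The maximally parallel mode in particular can be sketched as follows (a simplified Python illustration, not from the original entry: rules are chosen greedily in a random order until none is applicable to the remaining objects, which yields one of the possibly many maximal multisets of rules):

```python
import random
from collections import Counter

def maximally_parallel_step(M, rules):
    """One maximally parallel step on multiset M, where rules is a
    list of (u, v) pairs of Counters representing u -> v."""
    remaining = Counter(M)   # objects not yet assigned to a rule
    produced = Counter()     # right-hand sides of the applied rules
    applied_some = True
    while applied_some:
        applied_some = False
        for u, v in random.sample(rules, len(rules)):
            if all(remaining[s] >= n for s, n in u.items()):
                remaining -= u
                produced += v
                applied_some = True
                break
    return remaining + produced

# a -> bb used maximally: all three copies of a evolve in the same step.
assert maximally_parallel_step(Counter("aaa"),
                               [(Counter("a"), Counter("bb"))]) == Counter("bbbbbb")
```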

A sequence of transitions forms a computation, and with computations which halt (reach a configuration where no rule is applicable), we associate a result, for instance, in the form of the multiset of objects present in the halting configuration in a specified membrane.

This way of using a P system, starting from an initial configuration and computing a number, is a grammar-like (generative) one. We can also work in an automata style: an input is introduced in the system, for instance, in the form of a number represented by the multiplicity of an object placed in a specified membrane, and we start computing; the input number is accepted if and only if the computation halts. A combination of the two modes leads to a functional behavior: an input is introduced in the system (at the beginning, or symbol by symbol during the computation), and also an output is produced. In particular, we can have a decidability case, where the input encodes a decision problem and the output is one of two special objects representing the answers yes and no to the problem.

The generalization of this approach is obvious. We start from the cell, but the abstract model deals with very general notions: membranes interpreted as separators of regions with filtering capabilities, objects, and rules assigned to regions; the basic data structure is the multiset. Thus, membrane computing can be interpreted as a bio-inspired framework for distributed parallel processing of multisets. If the reader is interested in technical details, we suggest consulting Verlan (2013) as a starting point, as it gives good insight into the overall structure and the semantics of P systems.

As briefly introduced above, P systems are synchronous systems, and this feature is useful for theoretical investigations (e.g., for obtaining universality results or results related to the computational complexity of P systems). Non-synchronized systems were also considered: asynchronous in the standard sense, or even time-free or clock-free (e.g., generating the same output irrespective of the durations associated with the evolution rules). Similarly, in applications to biology, specific strategies of evolution are considered. We do not enter here into details; rather, we refer the reader to the bibliography given below.

Examples of P Systems

In what follows, in order to let the reader get a flavor of membrane computing, we will discuss in some detail only basic cell-like P systems and spiking neural P systems, and we refer to the area literature for other classes.

Basic Cell-Like P Systems

Because in this section we only consider cell-like P systems, they will be simply called P systems.

As said above, we look to the cell structure and functioning, trying to get suggestions for an abstract computing model. The fundamental feature of a cell is its compartmentalization through membranes. Accordingly, the main particularity of a cell-like P system is the membrane structure in the form of a hierarchical arrangement of membranes (thus corresponding to a tree). Figure 1 illustrates this notion and the related terminology.
Fig. 1

A membrane structure

We distinguish the external membrane (corresponding to the plasma membrane and usually called the skin membrane) and several internal membranes; a membrane without any other membrane inside is said to be elementary. Each membrane determines a compartment, also called region: the space delimited from above by it and from below by the membranes placed directly inside, if any exist. The correspondence between membranes and regions is one-to-one, so we identify a membrane and its associated region by the same label.

In the basic class of P systems, each region contains a multiset of symbol objects, described by symbols from a given alphabet.

The objects evolve by means of evolution rules, which are also localized, associated with the regions of the membrane structure. The typical form of such a rule is cd → (a, here) (b, out) (b, in), with the following meaning: one copy of object c and one copy of object d react, and the reaction produces one copy of a and two copies of b; the newly produced copy of a remains in the same region (indication here); one of the copies of b exits the compartment, going to the surrounding region (indication out); and the other enters one of the directly inner membranes (indication in). We say that the objects a, b, and b are communicated as indicated by the commands associated with them in the right-hand member of the rule. When an object exits the skin membrane, it is “lost” in the environment, and it possibly never comes back into the system. If no inner membrane exists (i.e., the rule is associated with an elementary membrane), then the indication in cannot be followed, and the rule cannot be applied.

As discussed in the previous section, the membrane structure and the multisets of objects from its compartments identify a configuration of a P system. By a nondeterministic maximally parallel use of rules as suggested above, we pass to another configuration; such a step is called a transition. A sequence of transitions constitutes a computation. A computation is successful if it halts, i.e., it reaches a configuration where no rule can be applied to the existing objects. With a halting computation, we can associate a result in various ways. The simplest possibility is to count the objects present in the halting configuration in a specified elementary membrane; this is called internal output. We can also count the objects which leave the system during the computation, and this is called external output. In both cases, the result is a number. If we distinguish among different objects, then we can have as the result a vector of natural numbers.

The objects which leave the system can also be arranged in a sequence according to the moments when they exit the skin membrane, and in this case the result is a string.

Because of the nondeterminism of rule application, starting from an initial configuration, we can get several successful computations, hence several results. Thus, a P system computes (one also says generates) a set of numbers, or a set of vectors of numbers, or a language.

In general, a (cell-like) P system is formalized as a construct
$$ \varPi =\left(O,\mu, {w}_1,\dots, {w}_m,{R}_1,\dots, {R}_m,{i}_0\right), $$

where O is the alphabet of objects (sometimes it can be split into several alphabets according to specific restrictions), μ is the membrane structure (with m membranes), w 1, …, w m are multisets of objects present in the m regions of μ at the beginning of a computation, R 1, …, R m are finite sets of evolution rules associated with the regions of μ, and i 0 is the label of a membrane, used as the output membrane.

We end this section with a simple example, illustrating the architecture and the functioning of a (cell-like) P system. Figure 2 indicates the initial configuration (rules included) of a system which computes a function, namely, n → n², for any natural number n ≥ 1. Besides catalytic rules (of the form ca → cv) and noncooperative (context-free) rules, the system also contains a rule with a promoter (a permitting context condition), \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \): the object b2 evolves to b2e only if at least one copy of object b1 is present in the same region.
Fig. 2

A P system with catalysts and promoters

In symbols, the system is given as follows:

Π = (O, C, μ, ω1, ω2, R1, R2, i0), where
  • \( O=\left\{a,{b}_1,{b}_1^{\prime },{b}_2,c,e\right\} \) (the set of objects)

  • C = {c} (the set of catalysts)

  • μ = [1[2]2]1 (membrane structure)

  • ω1 = c (initial objects in region 1)

  • ω2 = λ (region 2 is initially empty)

  • $$ {R}_1=\left\{a\to {b}_1{b}_2,\; c{b}_1\to c{b}_1^{\prime },\; {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1}\right\} $$ (rules in region 1)

  • R2 = ∅ (rules in region 2)

  • i0 = 2 (the output region)

We start with only one object in the system, the catalyst c. If we want to compute the square of a number n, then we have to input n copies of the object a in the skin region of the system. At that moment, the system starts working, by using the rule a → b1b2, which has to be applied in parallel to all copies of a; hence, in one step, all objects a are replaced by n copies of b1 and n copies of b2. From now on, the other two rules from region 1 can be used. The catalytic rule \( {cb}_1\to c{b}_1^{\prime } \) can be used only once in each step, because the catalyst is present in only one copy. This means that in each step, one copy of b1 gets primed. Simultaneously (because of the maximal parallelism), the rule \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \) should be applied as many times as possible, and this means n times, because we have n copies of b2. Note the important difference between the promoter b1, which allows using the rule \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \), and the catalyst c: the catalyst is involved in the rule and it is counted when applying the rule, while the promoter makes possible the use of the rule, but it is not counted; the same (copy of an) object can promote any number of rules. Moreover, the promoter can evolve at the same time by means of another rule (the catalyst is never changed).

In this way, in each step we change one b1 into its primed version and we produce n copies of e (one for each copy of b2); the copies of e are sent to membrane 2 (the indication in from the rule \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \)). The computation continues as long as there are applicable rules. This means exactly n steps: in n steps, the rule \( {cb}_1\to c{b}_1^{\prime } \) will exhaust the objects b1, and then neither this rule nor \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \) can be applied, because the promoter of the latter no longer exists. Consequently, in membrane 2, considered as the output membrane, we get n·n = n² copies of object e.

Note that the computation is deterministic (the next configuration of the system is always unique) and that, by changing the rule \( {\left.{b}_2\to {b}_2\left(e,\mathrm{in}\right)\right|}_{b_1} \) to \( {\left.{b}_2\to {b}_2\left(e,\mathrm{out}\right)\right|}_{b_1} \), the n² copies of e will be sent to the environment; hence, we can read the result of the computation outside the system, and in this case membrane 2 is useless.
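This behavior can be reproduced with a short simulation; the Python sketch below hard-codes the three rules of region 1 and the maximally parallel mode (the function and its encoding are illustrative, not part of the original presentation):

```python
from collections import Counter

def square_psystem(n):
    """Simulate the example system: region 1 starts with the catalyst c
    and n copies of a; region 2 collects the objects e and is returned
    as the result once no rule is applicable (halting)."""
    r1 = Counter({"c": 1, "a": n})
    r2 = Counter()
    while True:
        applied = False
        # a -> b1 b2, applied in parallel to every copy of a;
        # the new b1, b2 become available only in the next step.
        new_b = r1["a"]
        if new_b:
            r1["a"] = 0
            applied = True
        # b2 -> b2 (e, in) |b1: needs the promoter b1 at the start of
        # the step; every copy of b2 sends one e into region 2.
        if r1["b1"] and r1["b2"]:
            r2["e"] += r1["b2"]
            applied = True
        # c b1 -> c b1': the single catalyst allows one application per step.
        if r1["b1"]:
            r1["b1"] -= 1
            r1["b1'"] += 1
            applied = True
        r1["b1"] += new_b
        r1["b2"] += new_b
        if not applied:
            return r2["e"]

assert square_psystem(4) == 16   # n = 4 yields n**2 = 16 copies of e
```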

Spiking Neural P Systems

Spiking neural P systems (SN P systems) were introduced in Ionescu et al. (2006) with the aim of defining P systems based on ideas specific to spiking neurons, much investigated in neural computing.

Very shortly, an SN P system consists of a set of neurons (cells consisting of only one membrane) placed in the nodes of a directed graph and sending signals (spikes, denoted in what follows by the symbol a) along synapses (arcs of the graph). Thus, the architecture is that of a tissue-like P system, with only one kind of object present in the cells. The objects evolve by means of spiking rules, which are of the form E/a^c → a; d, where E is a regular expression over {a} and c, d are natural numbers, c ≥ 1, d ≥ 0. The meaning is that a neuron containing k spikes such that a^k ∈ L(E), k ≥ c, can consume c spikes and produce one spike, after a delay of d steps.

This spike is sent to all neurons to which a synapse exists outgoing from the neuron where the rule was applied. There also are forgetting rules, of the form a^s → λ, with the meaning that s ≥ 1 spikes are forgotten, provided that the neuron contains exactly s spikes. We say that the rules “cover” the neuron: all spikes are taken into consideration when using a rule. The system works in a synchronized manner, i.e., in each time unit, each neuron which can use a rule should do it, but the work of the system is sequential in each neuron: only (at most) one rule is used in each neuron. One of the neurons is considered to be the output neuron, and its spikes are also sent to the environment. The moments of time when a spike is emitted by the output neuron are marked with 1; the other moments are marked with 0. This binary sequence is called the spike train of the system – it might be infinite if the computation does not stop. In the spirit of spiking neurons, the result of a computation is encoded in the distance between consecutive spikes sent into the environment by the (output neuron of the) system. For example, we can consider only the distance between the first two spikes of a spike train, the distances between the first k spikes, or the distances between all consecutive spikes, taking into account all intervals or only intervals that alternate, all computations or only halting computations, etc.
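The applicability condition of a spiking rule E/a^c → a; d is easy to state in code: with the k spikes written as the string a^k, the neuron can use the rule iff that string matches E and k ≥ c. A Python sketch (illustrative, not from the original text):

```python
import re

def can_fire(k, E, c):
    """True iff a neuron holding k spikes can use a rule E/a^c -> a; d:
    the string a^k must belong to L(E) and at least c spikes must exist."""
    return k >= c and re.fullmatch(E, "a" * k) is not None

# The rule (aa)*/a -> a; 0 fires exactly on a positive even number of spikes.
assert can_fire(4, r"(aa)*", 1)
assert not can_fire(3, r"(aa)*", 1)
assert not can_fire(0, r"(aa)*", 1)
```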

An SN P system can also be used in the accepting mode: a neuron is designated as the input neuron and two spikes are introduced in it, at an interval of n steps; the number n is accepted if the computation halts.

Another possibility is to consider the spike train itself as the result of a computation, and then we obtain a (binary) language-generating device. We can also consider input neurons, and then an SN P system can work as a transducer. Languages over arbitrary alphabets can be obtained by generalizing the form of rules: take rules of the form E/a^c → a^p; d, with the meaning that, provided that the neuron is covered by E, c spikes are consumed and p spikes are produced and sent to all connected neurons after d steps (such rules are called extended). Then, with a step when the system sends out i spikes, we associate a symbol b_i, and thus we get a language over an alphabet with as many symbols as the number of spikes simultaneously produced. Another natural extension is to consider several output neurons, thus producing vectors of numbers, not only single numbers.

Also for SN P systems, we skip the technical details, but we consider a simple example. We give it first in a formal manner (if a rule E/a^c → a; d has L(E) = {a^c}, then we write it in the simplified form a^c → a; d):
  • Π1 = (O, σ1, σ2, σ3, syn, out), with

  • O = {a} (alphabet, with only one object, the spike)

  • σ1 = (2, {a^2/a → a; 0, a → λ}) (first neuron: initial spikes, rules)

  • σ2 = (1, {a → a; 0, a → a; 1}) (second neuron: initial spikes, rules)

  • σ3 = (3, {a^3 → a; 0, a → a; 1, a^2 → λ}) (third neuron: initial spikes, rules)

  • syn = {(1, 2), (2, 1), (1, 3), (2, 3)} (synapses)

  • out = 3 (output neuron)

This system is represented in graphical form in Fig. 3, and it functions as follows. All neurons can fire in the first step, with neuron σ2 choosing nondeterministically between its two rules. Note that neuron σ1 can fire only if it contains two spikes; one spike is consumed and the other remains available for the next step.
Fig. 3

An SN P system generating all natural numbers greater than 1

Both neurons σ1 and σ2 send a spike to the output neuron, σ3; these two spikes are forgotten in the next step. Neurons σ1 and σ2 also exchange their spikes; thus, as long as neuron σ2 uses the rule a → a; 0, the first neuron receives one spike, thus completing the two spikes it needs to fire again.

However, at any moment, starting with the first step of the computation, neuron σ2 can choose to use the rule a → a; 1. On the one hand, this means that the spike of neuron σ1 cannot enter neuron σ2, and it only goes to neuron σ3; in this way, neuron σ2 will never work again because it remains empty. On the other hand, in the next step, neuron σ1 has to use its forgetting rule a → λ, while neuron σ3 fires, using the rule a → a; 1. Simultaneously, neuron σ2 emits its spike, but it cannot enter neuron σ3 (which is closed at this moment); the spike enters neuron σ1, but it is forgotten in the next step. In this way, no spike remains in the system. The computation ends with the expelling of the spike from neuron σ3. Because of the waiting moment imposed by the rule a → a; 1 from neuron σ3, the two spikes of this neuron cannot be consecutive; at least two steps must elapse between them.

Thus, we conclude that Π1 computes/generates all natural numbers greater than or equal to 2.
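The behavior described above can be checked mechanically. The following sketch is our own construction, not part of the original entry: rules are encoded as tuples (E, c, p, d), the nondeterministic choice of neuron σ2 is scripted by the parameter t_choice (the step at which it uses a → a; 1), and the function returns the steps at which the output neuron spikes.

```python
import re

# Rules of Pi_1 as (regex E over {a}, spikes consumed, spikes produced, delay).
RULES = {
    1: [('aa', 1, 1, 0), ('a', 1, 0, 0)],                   # a^2/a -> a;0 , a -> lambda
    2: [('a', 1, 1, 0), ('a', 1, 1, 1)],                    # a -> a;0 , a -> a;1
    3: [('aaa', 3, 1, 0), ('a', 1, 1, 1), ('aa', 2, 0, 0)],
}
SYN = {1: [2, 3], 2: [1, 3], 3: []}   # neuron 3 spikes to the environment

def run(t_choice, max_steps=40):
    """Return the steps at which the output neuron (3) spikes."""
    spikes = {1: 2, 2: 1, 3: 3}
    pending = {}                      # neuron -> (emit_step, spikes_to_emit)
    out = []
    for t in range(1, max_steps + 1):
        emitted = {}
        for n, (et, p) in list(pending.items()):   # delayed spikes due now
            if et == t:
                emitted[n] = p
                del pending[n]
        for n, rs in RULES.items():
            if n in pending or n in emitted:       # closed, or just emitted
                continue
            usable = [r for r in rs
                      if re.fullmatch(r[0], 'a' * spikes[n]) and spikes[n] >= r[1]]
            if not usable:
                continue
            # sigma_2 is the only nondeterministic neuron; its choice is scripted
            rule = usable[0] if n != 2 else (rs[1] if t == t_choice else rs[0])
            spikes[n] -= rule[1]
            if rule[3] == 0:
                emitted[n] = rule[2]
            else:
                pending[n] = (t + rule[3], rule[2])
        for n, p in emitted.items():
            if p == 0:
                continue
            if n == 3:
                out.append(t)
            for m in SYN[n]:
                if m not in pending:               # closed neurons lose incoming spikes
                    spikes[m] += p
    return out

for t in range(1, 6):
    first, second = run(t)
    print(second - first)   # prints 2, 3, 4, 5, 6 in turn
```

For t_choice = 1, 2, 3, …, the distance between the two output spikes is 2, 3, 4, …, in agreement with the conclusion above.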

Generalized Communicating P Systems

Generalized communicating P systems were introduced in Verlan et al. (2008) as a generalization of minimal symport/antiport operations. The main idea is to model a conservative system where the objects cannot be transformed (rewritten), but are only moved through the membranes of the system. Moreover, this movement is performed in a particular manner which can be seen as a pairwise synchronization of objects. More precisely, the system structure is a so-called network of cells, which is a list of cells, each having an identifier and a multiset content. The generalized communicating rule is written as (a, i)(b, j) → (a, k)(b, l), which indicates that if an object a is present in the cell (membrane) with id (number) i and an object b is present in cell j, then they can synchronize, and as the result of the application of the rule, object a moves to cell k and object b moves to cell l. In case there are several copies of a and b in the corresponding cells, only one copy of each is moved by one application of the rule (this is called the 1:1 mode). In the 1:all mode, one copy of a and all copies of b are moved to the corresponding cells. In the general case, the indices i, j, k, l can be pairwise different; however, if some of them coincide, then restricted variants of the model are obtained (in particular, symport/antiport). We remark that the rules of the system induce a structure corresponding to a hypergraph.

The evolution mode is maximally parallel; this means that the whole system can be seen as a (maximally) parallel evolution of signals in a network, where only pairwise synchronization of signals is permitted. The result of the computation is the number of objects in some designated output cell.

Since no objects are rewritten or produced, the total number of objects in the system stays constant, and hence finite. In order to increase the computational power, a special cell, the environment, labeled by 0, is introduced. This cell contains an infinite supply of some particular objects.
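A single application of a generalized communicating rule in both modes can be sketched as follows (a minimal sketch of our own; the cell layout and function name are illustrative, not from the source):

```python
from collections import Counter

def apply_rule(cells, a, i, b, j, k, l, mode='1:1'):
    """Apply (a,i)(b,j) -> (a,k)(b,l): move one copy of a from cell i to
    cell k and, depending on the mode, one or all copies of b from cell j
    to cell l. Returns False when the rule is not applicable."""
    if cells[i][a] == 0 or cells[j][b] == 0:
        return False
    cells[i][a] -= 1
    cells[k][a] += 1
    moved = 1 if mode == '1:1' else cells[j][b]   # 1:all moves every copy of b
    cells[j][b] -= moved
    cells[l][b] += moved
    return True

cells = {0: Counter(), 1: Counter(a=2), 2: Counter(b=3)}
apply_rule(cells, 'a', 1, 'b', 2, 0, 0, mode='1:all')
print(cells[0]['a'], cells[0]['b'])   # 1 3: one a and all three b moved to cell 0
```

Once no pair (a, b) is available in the source cells, the rule simply ceases to be applicable, which is how such systems halt.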

We will consider as an example the computation of the function n^2. We start by introducing a graphical notation for the generalized communicating rules, as shown in Fig. 4.
Fig. 4

Graphical notation for the rule (a, i)(b, j) → (a, k)(b, l) in 1:1 mode (left) and 1:all mode (right)

Figure 5 gives the description of the system. We will skip the textual representation and the discussion about using a single mode and refer to Verlan et al. (2008) for the missing details. We only remark that we depicted cell 0 three times in order to simplify the picture – it should be read as the same cell in all three cases.
Fig. 5

Generalized communicating P system computing n^2. The rules are numbered for convenience

The initial value is given as the number of symbols A in cell 1. The system is performing the following algorithm (with the initial values 1 for B and 0 for C):

1 while (A > 0) {
2     A = A - 1
3     C = C + B
4     B = B + 2
5 }

Clearly, at the end of the above algorithm, C holds the square of the initial value of A. Technically, the three steps of the loop body are carried out using the additional symbol S (used as an instruction pointer to sequence the operations). The first rule corresponds to line 2 of the algorithm: it decreases the number of symbols A in cell 1, and the instruction pointer S moves to the next instruction.
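The claim can be verified directly with a transliteration of the loop (our own check, not from the source): C accumulates B = 1, 3, 5, …, i.e., the first n odd numbers, whose sum is n^2.

```python
def square(n):
    """Transliteration of the while loop above (initial values: B = 1, C = 0)."""
    a, b, c = n, 1, 0
    while a > 0:         # line 1
        a -= 1           # line 2
        c += b           # line 3
        b += 2           # line 4
    return c

print([square(n) for n in range(6)])   # [0, 1, 4, 9, 16, 25]
```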

Rule 2 operates in 1:all mode and moves all symbols B from cell 5 to cell 6. Further, rule 3 is applied in a maximally parallel manner, which means that for every B moved to cell 7, a symbol C is moved from the environment (cell 0) to cell 8. We recall that the environment contains an unbounded number of objects B and C. The overall action of rules 2 and 3 corresponds to the execution of line 3 of the algorithm.
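The maximally parallel application of rule 3 can be sketched as follows. This is a hedged reconstruction from the text, not the original formal definition: we write rule 3 as (B, 6)(C, 0) → (B, 7)(C, 8) in 1:1 mode, and the large constant standing in for the infinite environment supply is our own device.

```python
from collections import Counter

def step_max_parallel(cells):
    """Apply (B,6)(C,0) -> (B,7)(C,8) as many times as possible in one step."""
    applied = 0
    while cells[6]['B'] > 0 and cells[0]['C'] > 0:
        cells[6]['B'] -= 1; cells[7]['B'] += 1
        cells[0]['C'] -= 1; cells[8]['C'] += 1
        applied += 1
    return applied

ENV = 10**9   # stand-in for the unbounded supply in the environment (cell 0)
cells = {0: Counter(C=ENV), 6: Counter(B=5), 7: Counter(), 8: Counter()}
print(step_max_parallel(cells), cells[8]['C'])   # 5 5: one C paired with every B
```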

Next, rule 4 is executed, operating in 1:all mode. This moves all objects B from cell 7 back to cell 5. The next two rules (5 and 6) increment by 2 the number of objects B in cell 5, corresponding to line 4 of the algorithm.

Now object S has returned to cell 2 (and the objects B to cell 5), so a new iteration can start.

The system stops when there are no more objects A. It is easy to see that in this case, no rule is applicable, so the halting condition occurs.

Computing Power

As we have mentioned before, many classes of P systems, combining various ingredients (as described above or similar), are capable of simulating register machines; hence they are computationally complete. The proofs of results of this type are always constructive, and this has an important consequence from the computability point of view: there are universal (hence programmable) P systems. In short, starting from a universal Turing machine (or an equivalent universal device), we get an equivalent universal P system. Among other things, this implies that in the case of Turing complete classes of P systems, the hierarchy on the number of membranes always collapses (at most at the level of the universal P systems). Actually, the number of membranes sufficient in order to characterize the power of Turing machines by means of P systems is always rather small (one or two in most cases). We only mention here four of the most interesting (types of) universality results for cell-like P systems:
  1. P systems with symbol objects with catalytic rules, using only two catalysts and one membrane, are computationally complete.

  2. P systems with minimal symport/antiport rules (where at most two objects are involved in a rule), using two membranes, are computationally complete.

  3. P systems with symport/antiport rules (of arbitrary size), using only two objects and seven membranes, are computationally complete.

  4. P systems with symport/antiport rules with inhibitors (forbidding conditions), using one membrane and only 16 rules, are computationally complete.


There are several other similar results, as well as improvements or extensions of them. Many results are also known for tissue-like P systems. Details can be found, e.g., in the proceedings of the yearly Conference on Membrane Computing and Asian Conference on Membrane Computing mentioned in the bibliography of this entry. Most universality results were obtained in the deterministic case, but there are also situations where deterministic systems are strictly less powerful than nondeterministic ones. This is proven in Ibarra and Yen (2006) for accepting catalytic P systems.

The hierarchy on the number of membranes collapses in many cases also for nonuniversal classes of P systems, but there are also cases where "the number of membranes matters," to cite the title of Ibarra (2003), where two classes of P systems were defined for which the hierarchies on the number of membranes are infinite.

Various classes of SN P systems are also computationally complete as devices which generate or accept sets of numbers. This holds when no bound is imposed on the number of spikes present in any neuron; if such a bound exists, then the sets of numbers generated (or accepted) are semilinear.

Computational Efficiency

The computational power (the “competence”) is only one of the important questions to be dealt with when defining a new (bio-inspired) computing model. The other fundamental question concerns the computing efficiency. Because P systems are parallel computing devices, it is expected that they can solve hard problems in an efficient manner – and this expectation is confirmed for systems provided with ways for producing an exponential workspace in a linear time. However, we would like to remark that there are no physical implementations of P systems able to produce an exponential workspace in a linear time (and going beyond toy examples), so the corresponding research has rather theoretical importance.

Three main such biologically inspired possibilities have been considered so far in the literature, and all of them were proven to lead to polynomial solutions to NP-complete problems. These three ideas are membrane division, membrane creation, and string replication. The standard problems addressed in this framework were decision problems, starting with SAT, the Hamiltonian path problem, and the node-covering problem, but other types of problems were also considered, such as the problem of inverting one-way functions or the subset-sum and the knapsack problems (note that the last two are numerical problems, where the answer is not of the yes/no type, as in decision problems).
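To illustrate why membrane division brings efficiency, consider the classical construction for SAT: n division steps produce 2^n membranes, one per truth assignment, and each membrane then checks the formula against its own assignment in parallel. The sketch below is our own illustration, not the formal construction; a sequential computer must still enumerate the membranes, so the exponential work is made explicit rather than avoided.

```python
def sat_by_division(n_vars, clauses):
    """clauses: lists of signed ints; [1, -2] stands for (x1 or not x2)."""
    membranes = [{}]                       # one initial membrane, empty assignment
    for v in range(1, n_vars + 1):         # step v: every membrane divides in two
        membranes = [{**m, v: val} for m in membranes for val in (False, True)]
    def satisfied(m):                      # each membrane checks its own assignment
        return all(any(m[abs(lit)] == (lit > 0) for lit in cl) for cl in clauses)
    return any(satisfied(m) for m in membranes)

print(sat_by_division(2, [[1, -2], [-1, 2]]))   # True (x1 = x2 satisfies both clauses)
```

In a P system with membrane division, the n doubling steps and the final check each take polynomially many parallel steps; the exponential cost is paid in space (number of membranes), not in time.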

Roughly speaking, the framework for dealing with complexity matters is that of accepting P systems with input: a family of P systems of a given type is constructed starting from a given problem, and an instance of the problem is introduced as an input in such systems. Working in a deterministic mode (or a confluent mode: some nondeterminism is allowed, provided that the branching converges after a while to a unique configuration or, in the weakly confluent case, all computations halt and all of them provide the same result), in a given time, one of the answers yes/no is obtained, in the form of specific objects sent to the environment. The family of systems should be constructed in a uniform manner by a Turing machine working in polynomial time.

This direction of research is very active at present. More and more problems are considered, the membrane computing complexity classes are refined, characterizations of the P ≠ NP conjecture were obtained in this framework, several characterizations of the class P were given, and even PSPACE-complete problems were proven to be solvable in polynomial time by means of membrane systems provided with membrane division or membrane creation. An important (and difficult) problem is that of finding the borderline between efficiency and non-efficiency: which ingredients should be used in order to be able to solve hard problems in polynomial time? Many results in this respect were reported by M.J. Pérez-Jiménez and his co-workers (see "Bibliography"), but many problems remain open in this respect.

Future Directions

Although it has developed considerably in the less than 18 years since the investigations were initiated, membrane computing still has a large number of open problems and research topics which await research efforts.

A general class of theoretical questions concerns the borderline between universality and nonuniversality or between efficiency and non-efficiency, i.e., concerning the succinctness of P systems able to compute at the level of Turing (or register) machines or to solve hard problems in polynomial time, respectively. Then, because universality implies undecidability of all nontrivial questions, an important issue is that of finding classes of P systems with decidable properties.

This is also related to the use of membrane computing as a modeling framework: if no insights can be obtained in an analytical manner, algorithmically, then what remains is to simulate the system on a computer. To this aim, better programs are still needed, maybe parallel implementations, able to handle real-life questions (for instance, in the quorum sensing area, existing applications deal with hundreds of bacteria, but biologists would need simulations at the level of thousands of bacteria in order to get convincing results).

We give below some interesting topics that have arisen in recent years; however, we recommend reading the Bulletin of the International Membrane Computing Society as well as the proceedings of the recent conferences on membrane computing to stay up to date with the newest research trends in the area.

At the time of writing this material, the following topics attract a lot of research effort (we concentrate on the theoretical topics; other questions related to applications are discussed in another chapter of this book):
  • Introduction and the investigation of different variants of spiking neural P systems. The topics vary from the introduction of new (biologically motivated) ingredients to the computational completeness, universality, and efficiency.

  • Investigation of P colonies, which is a model of P systems using extremely simple rules and resources.

  • Investigation of P systems where the multiset structure and operations are replaced by generalized variants. The simplest versions are fuzzy and rough multisets; recent developments make use of generalized multisets, defined as functions from an alphabet to an abelian group.

  • Investigation of different new properties of P systems, like new derivation modes and halting conditions and new types of objects and rules with special semantics.

  • P system simulator design and questions related to efficient implementations.

  • Verification frameworks using P systems, especially based on kernel P systems.



Acknowledgments

The work of G. Zhang was supported by the National Natural Science Foundation of China (61373047 and 61672437) and the Research Project of Key Laboratory of Fluid and Power Machinery (Xihua University), Ministry of Education, P.R. China (JYBFXYQ-1).


Primary Literature

  1. Ciobanu G, Păun G, Pérez-Jiménez MJ (2006a) Applications of membrane computing, vol 17. Springer, Berlin
  2. Ciobanu G, Pérez-Jiménez MJ, Păun Gh (2006b) Applications of membrane computing. Natural computing series. Springer, Berlin
  3. Freund R, Leporati A, Mauri G, Porreca AE, Verlan S, Zandron C (2013) Flattening in (tissue) P systems. In: Alhazov A, Cojocaru S, Gheorghe M, Rogozhin Y, Rozenberg G, Salomaa A (eds) Membrane computing – 14th international conference, CMC 2013, Chişinău, 20–23 Aug 2013, revised selected papers. Lecture notes in computer science, vol 8340. Springer, pp 173–188
  4. Ibarra OH (2003) The number of membranes matters. In: Martín-Vide C, Mauri G, Păun G, Rozenberg G, Salomaa A (eds) Membrane computing, international workshop, WMC 2003, Tarragona, 17–22 July 2003, revised papers. Lecture notes in computer science, vol 2933. Springer, pp 218–231. https://doi.org/10.1007/978-3-540-24619-0_16
  5. Ibarra OH, Yen H (2006) Deterministic catalytic systems are not universal. Theor Comput Sci 363(2):149–161. https://doi.org/10.1016/j.tcs.2006.07.029
  6. Ionescu M, Păun G, Yokomori T (2006) Spiking neural P systems. Fundam Informaticae 71(2–3):279–308
  7. Păun G (2000) Computing with membranes. J Comput Syst Sci 61(1):108–143. https://doi.org/10.1006/jcss.1999.1693; also Turku Center for Computer Science report TUCS 208, Nov 1998
  8. Păun Gh (2002) Membrane computing: an introduction. Natural computing series. Springer, Berlin
  9. Păun G, Rozenberg G, Salomaa A (eds) (2010) The Oxford handbook of membrane computing. Oxford University Press, Oxford
  10. Turing AM (1937) On computable numbers, with an application to the Entscheidungsproblem. Proc London Math Soc s2-42(1):230–265. https://doi.org/10.1112/plms/s2-42.1.230
  11. Verlan S (2013) Using the formal framework for P systems. In: Alhazov A, Cojocaru S, Gheorghe M, Rogozhin Y, Rozenberg G, Salomaa A (eds) Membrane computing – 14th international conference, CMC 2013, Chişinău, 20–23 Aug 2013, revised selected papers. Lecture notes in computer science, vol 8340. Springer, pp 56–79
  12. Verlan S, Bernardini F, Gheorghe M, Margenstern M (2008) Generalized communicating P systems. Theor Comput Sci 404(1–2):170–184
  13. Zhang G, Pérez-Jiménez MJ, Gheorghe M (2017) Real-life applications with membrane computing. Emergence, complexity and computation, vol 25. Springer, Cham

Books and Reviews

  1. Alhazov A, Cojocaru S, Gheorghe M, Rogozhin Y, Rozenberg G, Salomaa A (eds) (2014) Membrane computing – 14th international conference, CMC 2013, Chişinău, 20–23 Aug 2013, revised selected papers. Lecture notes in computer science, vol 8340. Springer. https://doi.org/10.1007/978-3-642-54239-8
  2. Corne DW, Frisco P, Păun Gh, Rozenberg G, Salomaa A (eds) (2009) Membrane computing – 9th international workshop, WMC 2008, Edinburgh, 28–31 July 2008, revised selected and invited papers. Lecture notes in computer science, vol 5391. Springer. https://doi.org/10.1007/978-3-540-95885-7
  3. Csuhaj-Varjú E, Gheorghe M, Rozenberg G, Salomaa A, Vaszil G (eds) (2013) Membrane computing – 13th international conference, CMC 2012, Budapest, 28–31 Aug 2012, revised selected papers. Lecture notes in computer science, vol 7762. Springer. https://doi.org/10.1007/978-3-642-36751-9
  4. Eleftherakis G, Kefalas P, Păun Gh, Rozenberg G, Salomaa A (eds) (2007) Membrane computing, 8th international workshop, WMC 2007, Thessaloniki, 25–28 June 2007, revised selected and invited papers. Lecture notes in computer science, vol 4860. Springer
  5. Freund R, Păun Gh, Rozenberg G, Salomaa A (eds) (2006) Membrane computing, 6th international workshop, WMC 2005, Vienna, 18–21 July 2005, revised selected and invited papers. Lecture notes in computer science, vol 3850. Springer
  6. Frisco P (2009) Computing with cells: advances in membrane computing. Oxford University Press, Oxford
  7. Gheorghe M, Hinze T, Păun Gh, Rozenberg G, Salomaa A (eds) (2011) Membrane computing – 11th international conference, CMC 2010, Jena, 24–27 Aug 2010, revised selected papers. Lecture notes in computer science, vol 6501. Springer. https://doi.org/10.1007/978-3-642-18123-8
  8. Gheorghe M, Păun Gh, Rozenberg G, Salomaa A, Verlan S (eds) (2012) Membrane computing – 12th international conference, CMC 2011, Fontainebleau, 23–26 Aug 2011, revised selected papers. Lecture notes in computer science, vol 7184. Springer. https://doi.org/10.1007/978-3-642-28024-5
  9. Gheorghe M, Rozenberg G, Salomaa A, Sosík P, Zandron C (eds) (2014) Membrane computing – 15th international conference, CMC 2014, Prague, 20–22 Aug 2014, revised selected papers. Lecture notes in computer science, vol 8961. Springer. https://doi.org/10.1007/978-3-319-14370-5
  10. Hoogeboom HJ, Păun Gh, Rozenberg G, Salomaa A (eds) (2006) Membrane computing, 7th international workshop, WMC 2006, Leiden, 17–21 July 2006, revised, selected, and invited papers. Lecture notes in computer science, vol 4361. Springer. https://doi.org/10.1007/11963516
  11. Leporati A, Rozenberg G, Salomaa A, Zandron C (eds) (2017) Membrane computing – 17th international conference, CMC 2016, Milan, 25–29 July 2016, revised selected papers. Lecture notes in computer science, vol 10105. Springer. https://doi.org/10.1007/978-3-319-54072-6
  12. Martín-Vide C, Mauri G, Păun Gh, Rozenberg G, Salomaa A (eds) (2004) Membrane computing, international workshop, WMC 2003, Tarragona, 17–22 July 2003, revised papers. Lecture notes in computer science, vol 2933. Springer
  13. Mauri G, Păun Gh, Pérez-Jiménez MJ, Rozenberg G, Salomaa A (eds) (2005) Membrane computing, 5th international workshop, WMC 2004, Milan, 14–16 June 2004, revised selected and invited papers. Lecture notes in computer science, vol 3365. Springer
  14. Păun Gh, Pérez-Jiménez MJ, Riscos-Núñez A, Rozenberg G, Salomaa A (eds) (2010) Membrane computing, 10th international workshop, WMC 2009, Curtea de Arges, 24–27 Aug 2009, revised selected and invited papers. Lecture notes in computer science, vol 5957. Springer. https://doi.org/10.1007/978-3-642-11467-0
  15. Rozenberg G, Salomaa A, Sempere JM, Zandron C (eds) (2015) Membrane computing – 16th international conference, CMC 2015, Valencia, 17–21 Aug 2015, revised selected papers. Lecture notes in computer science, vol 9504. Springer. https://doi.org/10.1007/978-3-319-28475-0

Copyright information

© Springer Science+Business Media LLC 2017

Authors and Affiliations

  • Marian Gheorghe (1, email author)
  • Andrei Păun (2)
  • Sergey Verlan (3)
  • Gexiang Zhang (4, 5, 6)

  1. School of Electrical Engineering and Computer Science, University of Bradford, Bradford, UK
  2. Department of Computer Science, University of Bucharest, Bucharest, Romania
  3. LACL, Université Paris Est Créteil, Créteil, France
  4. Robotics Research Center, Xihua University, Chengdu, China
  5. Key Laboratory of Fluid and Power Machinery, Xihua University, Ministry of Education, Chengdu, China
  6. School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China