1 Introduction

The formal operation of splicing on strings was introduced in [5] as an abstraction of the biological phenomenon of DNA recombination under the effect of restriction enzymes and ligases. The biological phenomenon is illustrated in Fig. 1; we give here a few informal explanations. Two DNA molecules (the blue and the red ones) are cut by a restriction enzyme (in this case the enzyme is EcoRI). This process yields fragments with Watson–Crick complementary tails called “sticky ends”. These sticky ends may join again, leading to the recombination of DNA. To fix the new combination, an enzyme called ligase seals the gaps after the sticky ends are joined.

Fig. 1
figure 1

Splicing operation (Klug and Cummings 1997)

We follow [15] for the formal definition of splicing as an operation on pairs of strings. First, we need to define what a splicing rule is: a quadruple of strings specifying the subsequences at which the two strings are cut. A splicing rule is thus intended to abstract a restriction enzyme, and its subsequences indicate the sites where the enzyme cuts. Different computational models based on the iteration of this operation may be defined. Thus, a generating splicing system initiates a computation from a given finite set of strings (axioms) and iteratively applies splicing rules from a given finite set of such rules, eventually producing a language. This computational model was introduced in [5]; since then, the model and its variants have been intensively investigated. The splicing operation, as a formal operation on words and languages, has been actively studied for more than two decades. Many papers have been published, as well as several books containing chapters devoted to this topic. We mention here just a few of them: [6, 9, 16], containing extensive chapters about splicing, as well as [7, 8], containing chapters that discuss various applications. There are two types of splicing systems: generating systems, which generate a language by iteratively applying splicing rules to the strings obtained from a finite set of initial strings, and accepting systems, which start with one input string and a finite set of axioms and iterate splicing as above. The computation halts when at least one string from a predefined set is obtained, and the input string is accepted as soon as the system halts. The accepting splicing system was introduced by Mitrana et al. in [13], while different variants have been studied in [1, 4, 14], among others.

In [10] a highly parallel and distributed computational model based on the splicing operation was introduced: the network of splicing processors (NSP). This model consists of an undirected graph whose nodes each host a splicing processor. A splicing processor consists of a finite set of splicing rules, a finite set of strings (axioms), and four sets of symbols, two of which define the input filter while the other two define the output filter. A computation in a network of splicing processors is a sequence of alternating splicing and communication steps. In a splicing step, each processor applies, in parallel, the splicing rules it contains to all the strings existing at that moment in the processor. Note that we assume that each string appearing in a processor at some moment actually appears in an unlimited number of identical copies, so that different copies may be rewritten by different splicing rules. In a communication step, all the strings existing in the network nodes are simultaneously expelled from their nodes, provided that they can pass the output filters of those nodes. In the same communication step, an arbitrarily large number of copies of each string expelled from one node (the sender) enters all the nodes (the receivers) connected to the sender, provided that the string can pass the input filters of the receivers. The computation halts as soon as a predefined node, called Halt, contains at least one string.

Several variants of NSP have been considered so far, most of them being computationally complete; see, e.g., [2, 3, 10,11,12]. These networks have an ad hoc underlying graph structure. For various reasons, such as possible implementations, uniformity, and comparisons, it would be useful to have networks with a fixed and well-known topology: complete graph, star, grid, etc. This is precisely the aim of this work: to investigate the possibility of transforming a given NSP into an equivalent NSP with an underlying graph of such a predefined structure. We are interested not only in the construction of these networks but also in comparing the computational time and size of the constructed networks with those of the original ones.

2 Basic definitions

In this section we introduce the main concepts and notations that will be used in the sequel. For those notions not defined here we refer to [17].

An alphabet is a finite and nonempty set of symbols. The cardinality of a finite set A is written card(A). Any finite sequence of symbols from an alphabet V is called a string over V. The set of all strings over V is denoted by \(V^*\) and the empty string is denoted by \(\varepsilon\). The length of a string x is denoted by |x|, while alph(x) denotes the minimal alphabet W such that \(x\in W^*\). A language over the alphabet V is a set \(L\subseteq V^*\).

We give now the formal definition of the splicing operation following [15]. A splicing rule over a finite alphabet V is a quadruple of strings of the form \([(u_{1},u_{2});(v_{1},v_{2})]\) such that \(u_{1}\), \(u_{2}\), \(v_{1}\), and \(v_{2}\) are in \(V^{*}\). For a splicing rule \(r = [(u_{1},u_{2});(v_{1},v_{2})]\) and for \(x,y,z\in V^{*}\), we say that r produces z from x and y (denoted by \((x,y)\vdash _{r}z\)) if there exist some \(x_{1}, x_{2}, y_{1}, y_{2}\in V^{*}\) such that \(x=x_{1}u_{1}u_{2}x_{2}\), \(y=y_{1}v_{1}v_{2}y_{2}\), and \(z=x_{1}u_{1}v_{2}y_2\). For a language L over V and a set of splicing rules R we define

$$\begin{aligned} \sigma _{R}(L)=\{z\in V^{*}\mid \exists u, v \in L, \exists r \in R \text{ such } \text{ that } (u, v) \vdash _r z\}. \end{aligned}$$

A short discussion is in order here. As one can see, the splicing rule defined above is a 1-splicing rule in the sense of [6]. However, in the rest of the paper we do not make any distinction between the two strings a splicing rule is applied to; therefore, we may say that the rules are actually 2-splicing rules.
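The splicing operation and the induced operator \(\sigma_R\) can be sketched in Python; the function names (`splice`, `sigma`) and the rendering of symbols as single characters are our own illustrative assumptions, not part of the formal model.

```python
# Illustrative sketch (not from the paper): a splicing rule r = (u1, u2, v1, v2)
# applied to strings x and y, and the operator sigma_R over a language L.
# Symbols are modeled as single characters of Python strings.

def splice(rule, x, y):
    """All z with (x, y) |-_r z, i.e. x = x1 u1 u2 x2, y = y1 v1 v2 y2, z = x1 u1 v2 y2."""
    u1, u2, v1, v2 = rule
    cut_x, cut_y = u1 + u2, v1 + v2
    results = set()
    for i in range(len(x) - len(cut_x) + 1):
        if x[i:i + len(cut_x)] != cut_x:
            continue
        for j in range(len(y) - len(cut_y) + 1):
            if y[j:j + len(cut_y)] == cut_y:
                results.add(x[:i] + u1 + v2 + y[j + len(cut_y):])
    return results

def sigma(rules, L):
    """sigma_R(L): every string producible by one splicing of two strings of L."""
    return {z for u in L for v in L for r in rules for z in splice(r, u, v)}
```

For instance, an EcoRI-like rule [(G, AATTC); (G, AATTC)] splices aGAATTCb with cGAATTCd into aGAATTCd, mirroring the recombination of Fig. 1.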

Let V be an alphabet; we now define two predicates, one with strong conditions (s) and another with weak restrictions (w), for a string \(z\in V^+\) and two disjoint subsets P, F of V as follows:

$$\begin{aligned} \varphi ^{(s)}(z;P,F)&\equiv P\subseteq alph(z) \,\wedge \,F\cap alph(z)=\emptyset \\ \varphi ^{(w)}(z;P,F)&\equiv alph(z)\cap P \ne \emptyset \,\wedge \,F\cap alph(z)=\emptyset . \end{aligned}$$

In the definition of these predicates, the set P is a set of permitting symbols while the set F is a set of forbidding symbols. Informally, both conditions require that no forbidding symbol occurs in z. As one can see, the former condition is stronger than the latter, since it requires that all permitting symbols be present in z, while the latter requires that at least one permitting symbol appears in z.

These predicates are extended to a language \(L\subseteq V^*\) by

$$\begin{aligned} \varphi ^\beta (L,P,F)=\{w\in L\mid \varphi ^\beta (w;P,F)\}, \end{aligned}$$

with \(\beta \in \{(s),(w)\}\).
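These predicates can be sketched as follows (illustrative Python, with names of our own choosing and symbols modeled as characters):

```python
# Illustrative sketch (our own naming): the strong (s) and weak (w) filter
# predicates and their extension to a language.

def phi(beta, z, P, F):
    """beta = "s" (strong) or "w" (weak); P permitting, F forbidding symbols."""
    a = set(z)                      # alph(z)
    if F & a:                       # no forbidding symbol may occur in z
        return False
    return P <= a if beta == "s" else bool(P & a)

def phi_lang(beta, L, P, F):
    """phi^beta(L, P, F): the strings of L satisfying the predicate."""
    return {w for w in L if phi(beta, w, P, F)}
```

Note the asymmetry when P is empty: the strong predicate is then satisfied by any string without forbidding symbols, while the weak one is satisfied by none.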

A splicing processor over an alphabet V is a 6-tuple (S, A, PI, FI, PO, FO), where:

  • S is a finite set of splicing rules over V.

  • A is a finite set of auxiliary strings over V. These auxiliary strings are to be used, together with the existing strings, in the splicing steps of the processors. Auxiliary strings are available at any moment.

  • \(PI,FI\subseteq V\) are the sets of permitting and forbidding symbols, respectively, which form the input filter of the processor.

  • \(PO,FO\subseteq V\) are the sets of permitting and forbidding symbols, respectively, which form the output filter of the processor.

The set of splicing processors over V is denoted by \(SP_V\).

A network of splicing processors is a 9-tuple \(\varGamma =(V,U,\langle ,\rangle ,G,{\mathcal {N}},\alpha ,{\underline{In}}\), \({\underline{Halt}})\), where:

  • V and U are the input and network alphabet, respectively, \(V\subseteq U\), and \(\langle ,\rangle \in U\setminus V\) are two special symbols.

\(G=(X_G,E_G)\) is an undirected graph without loops, with the set of nodes \(X_G\) and the set of edges \(E_G\). Each edge is given as a two-element set of nodes. G is called the underlying graph of the network.

  • \({\mathcal {N}}:X_G\longrightarrow SP_U\) is a mapping, which associates with each node \(x\in X_G\) the splicing processor \({\mathcal {N}}(x)=(S_x,A_x,PI_x,FI_x,PO_x,FO_x)\).

  • \(\alpha : X_G\longrightarrow \{(s),(w)\}\) defines the type of the filters of a node.

  • \({\underline{In}}, {\underline{Halt}} \in X_G\) are the input and the halting node of \(\varGamma\), respectively.

The size of an NSP \(\varGamma\) is defined as the number of nodes of the graph, i.e., \(card(X_G)\). A configuration of an NSP \(\varGamma\) is a mapping \(C:X_G\rightarrow 2^{U^*}\), which associates a set of strings with every node of the graph. Although a configuration is a multiset of strings, each one appearing in an arbitrary number of copies, for the sake of simplicity, we work with the support of this multiset. A configuration can be seen as the sets of strings, except the auxiliary ones, which are present in the nodes at some moment. For a string \(w\in V^*\), we define the initial configuration of \(\varGamma\) on w by \(C^{(w)}_0({\underline{In}}) = \{\langle w \rangle \}\) and \(C^{(w)}_0 (x) =\emptyset\) for all other \(x\in X_G\).

A configuration is followed by another configuration either by a splicing step or by a communication step. A configuration \(C'\) follows a configuration C by a splicing step if, for each node x, the component \(C'(x)\) is the result of applying all the splicing rules in the set \(S_x\) that can be applied to the strings in C(x) together with those in \(A_x\). Formally, configuration \(C'\) follows the configuration C by a splicing step, written as \(C\Rightarrow C'\), iff for all \(x\in X_G\) the following holds:

$$\begin{aligned} C'(x)=\sigma _{S_x}(C(x)\cup A_x). \end{aligned}$$
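A splicing step can be sketched as follows (illustrative Python with names of our own choosing; a minimal splice helper is inlined so the fragment is self-contained):

```python
# Illustrative sketch: one splicing step, C'(x) = sigma_{S_x}(C(x) ∪ A_x)
# for every node x. Symbols are modeled as single characters.

def splice_all(rule, x, y):
    """All results of applying rule (u1, u2, v1, v2) to the pair (x, y)."""
    u1, u2, v1, v2 = rule
    cx, cy = u1 + u2, v1 + v2
    return {x[:i] + u1 + v2 + y[j + len(cy):]
            for i in range(len(x) - len(cx) + 1) if x[i:i + len(cx)] == cx
            for j in range(len(y) - len(cy) + 1) if y[j:j + len(cy)] == cy}

def splicing_step(C, S, A):
    """C, S, A map each node to its current strings, rules, and axioms."""
    new = {}
    for x in C:
        pool = C[x] | A[x]          # current strings together with the axioms
        new[x] = {z for u in pool for v in pool for r in S[x]
                  for z in splice_all(r, u, v)}
    return new
```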

In a communication step, the following actions take place simultaneously for every node x:

  1. (i)

    all the strings that can pass the output filter of a node are sent out of that node;

  2. (ii)

    all the strings that left their nodes enter all the nodes connected to their original ones, provided that they can pass the input filter of the receiving nodes.

Note that, according to this definition, those strings that are sent out of a node and cannot pass the input filter of any node are lost. Formally, a configuration \(C'\) follows a configuration C by a communication step (we write \(C\models C'\)) iff for all \(x\in X_G\)

$$\begin{aligned}&C'(x)=(C(x)-\varphi ^{\alpha (x)}(C(x),PO_x,FO_x))\cup \\&\quad \displaystyle {\bigcup _{\{x,y\}\in E_G}(\varphi ^{\alpha (y)}(C(y),PO_y,FO_y)\cap \varphi ^{\alpha (x)}(C(y),PI_x,FI_x))} \end{aligned}$$

holds. For an NSP \(\varGamma\), a computation on an input string w is defined as a sequence of configurations \(C^{(w)}_0\), \(C^{(w)}_1\), \(C^{(w)}_2,\dots\), where \(C^{(w)}_0\) is the initial configuration of \(\varGamma\) on w, \(C^{(w)}_{2i}\Rightarrow C^{(w)}_{2i+1}\), and \(C^{(w)}_{2i+1}\models C^{(w)}_{2i+2}\) for all \(i\ge 0\). A computation on an input string w halts if there exists \(k\ge 1\) such that \(C^{(w)}_k({\underline{Halt}})\) is non-empty; such a computation is called an accepting computation. As the halting node is used only for ending the computation, we shall consider that \(S_{{\underline{Halt}}}=A_{{\underline{Halt}}}=\emptyset\). Furthermore, because the computation halts as soon as a string enters \({\underline{Halt}}\) and no string goes out, we may also consider that \(PO_{{\underline{Halt}}}=FO_{{\underline{Halt}}}=\emptyset\).
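The communication step above can be sketched as follows (illustrative Python; the dictionary layout for filters and the function names are our own assumptions):

```python
# Illustrative sketch: one communication step of an NSP. Each node carries
# its filter data (PO, FO, PI, FI, beta); edges are two-element frozensets.

def phi(beta, z, P, F):
    """Strong ("s") or weak ("w") random-context filter predicate."""
    a = set(z)
    if F & a:
        return False
    return P <= a if beta == "s" else bool(P & a)

def communicate(C, nodes, edges):
    """C: node -> set of strings; nodes: node -> filter dict."""
    # Strings that pass the output filter leave their node (and may be lost).
    out = {x: {z for z in C[x]
               if phi(nodes[x]["beta"], z, nodes[x]["PO"], nodes[x]["FO"])}
           for x in C}
    new = {x: C[x] - out[x] for x in C}
    # Expelled strings enter every neighbour whose input filter they pass.
    for x in C:
        for y in C:
            if frozenset({x, y}) in edges:
                new[x] |= {z for z in out[y]
                           if phi(nodes[x]["beta"], z, nodes[x]["PI"], nodes[x]["FI"])}
    return new
```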

The language accepted by \(\varGamma\) is defined as

$$\begin{aligned}&L(\varGamma )=\{z\in V^*\mid \hbox { the computation of }\varGamma \\&\quad \hbox { on } z \hbox { is an accepting computation}\}. \end{aligned}$$

Given an NSP \(\varGamma\) with the input alphabet V, we define the following computational complexity measure. The time complexity of the finite computation \(C_0^{(x)}\), \(C_1^{(x)}\), \(C_2^{(x)}\), \(\dots\), \(C_m^{(x)}\) of \(\varGamma\) on \(x\in V^*\) is denoted by \(Time_{\varGamma }(x)\) and equals m. The time complexity of \(\varGamma\) is the partial function from \({\mathbb {N}}\) to \({\mathbb {N}}\),

$$\begin{aligned} Time_{\varGamma }(n)=\text{ max }\{Time_{\varGamma }(x)\mid x\in L(\varGamma ), |x|=n\}. \end{aligned}$$

3 Complexity preserving simulations

3.1 Simulating arbitrary NSP by complete NSP

Theorem 1

For every NSP \(\varGamma\) one can construct a complete NSP \(\varGamma '\) such that the following conditions are satisfied:

  1. 1.

    \(L(\varGamma )=L(\varGamma ')\).

  2. 2.

    \(Time_{\varGamma '}(n) \in {\mathcal {O}}(Time_{\varGamma } (n))\).

  3. 3.

    \(size(\varGamma ')=size(\varGamma )+2\).

Proof

Let \(\varGamma =(V,U,<,>,G,{{\mathcal {N}}},\beta ,{\underline{In}},{\underline{Halt}})\) be an NSP with the underlying graph \(G=(X_G,E_G)\) and \(X_G=\{x_1,x_2,\dots ,x_n\}\) for some \(n\ge 2\), where \(x_1 \equiv {\underline{In}}\) and \(x_n \equiv {\underline{Halt}}\). We construct the NSP \(\varGamma '=(V',U',<,>,G',{{\mathcal {N}}'},\beta ',\underline{In'},\underline{Halt'})\), where

$$\begin{aligned} V' = V, \ \ \ U'= U \cup T \cup \{\#\}, \ \ \ T = \{ t_i \mid 0\le i\le n\}, \end{aligned}$$

\(G'\) is the complete graph \(K_{n+2}\) represented in Fig. 2, with the nodes \(\{\underline{In'}\), \(x_{comp}\), \(x_1^s\), \(x_2^s\), \(\dots\), \(x_{n-1}^s\), \(\underline{Halt'}\}\) defined as follows:

  • node \(\underline{In'}\):

    $$\begin{aligned} \begin{array}{ll} S = \{[(a,>);(\#,>t_0)]\mid a \in U\}, &{} A = \{\#>t_0\},\\ PI = U\setminus (T\cup \{\#\}), &{} FI = \{\#\} \cup T,\\ PO = U\setminus \{\#\}, &{} FO = \{\#\},\\ \beta ' = (w) &{} \end{array} \end{aligned}$$
  • node \(x_{comp}\):

$$\begin{aligned} \begin{array}{ll} S = \{[(\varepsilon ,t_j);(\#,t_i)]\mid (1 \le j \ne i \le n) \wedge &{} A = \{ \#t_i \mid 1 \le i \le n \},\\ (\{x_i,x_j\} \in E_G)\} \cup \{[(\varepsilon ,t_0);(\#,t_1)]\}, &{} \\ PI = U\setminus \{\#\}, &{} FI = \{\#\},\\ PO = U\setminus \{\#\}, &{} FO = \{\#\},\\ \beta ' = (w) &{} \end{array} \end{aligned}$$
  • node \(x_i^s, 1 \le i \le n-1\):

    $$\begin{aligned} \begin{array}{ll} S = S_{x_i}, &{} A = \{ zt_i \mid z \in A_{x_i} \},\\ PI = PI_{x_i}, &{} FI = FI_{x_i} \cup T \setminus \{ t_i \},\\ PO = PO_{x_i}, &{} FO = FO_{x_i},\\ \beta ' = \beta (x_i) &{} \end{array} \end{aligned}$$
  • node \(\underline{Halt'}\):

    $$\begin{aligned} \begin{array}{ll} S = \emptyset , &{} A = \emptyset ,\\ PI = PI_{{\underline{Halt}}}, &{} FI = FI_{{\underline{Halt}}},\\ PO = \emptyset , &{} FO = \emptyset ,\\ \beta = \beta ({\underline{Halt}}). \end{array} \end{aligned}$$
Fig. 2
figure 2

The underlying graph of \(\varGamma '\)

We now analyze a computation of \(\varGamma '\) on the input string \(<w>\). In the input node \(\underline{In'}\), the symbol \(t_0\) is attached to the end of the string. Next, the symbol \(t_0\) is replaced by \(t_1\) in the node \(x_{comp}\). When the string goes out, it can only enter \(x_1^s\), and the simulation of a computation in \(\varGamma\) starts. Thus, the string \(<w>t_1\) lies in \(x_1^s\), while the string \(<w>\) is found in \(x_1\), the input node of \(\varGamma\). More generally, we may assume that a string \(zt_i\) is found in a node \(x_i^s\) of \(\varGamma '\) if and only if the corresponding string z lies in \(x_i\) of \(\varGamma\), for all \(1\le i\le n-1\). Note that the strings can never return to \(\underline{In'}\) because of the input filter of this node.

Let \(x_i\) be a splicing node, where a rule [(a, b); (u, v)] is applied to w yielding \(w'\) and \(w''\). Then, the same rule is applied in \(x_i^s\) and strings of the form \(w't_i\) and \(w''t_i\) are produced. Indeed, since all the strings in \(A_{x_i^s}\) and any string entering \(x_i^s\) have the symbol \(t_i\) at the end, the splicing rule always yields strings keeping the symbol \(t_i\) as the last one. Since the nodes \(x_i\) and \(x_i^s\) have the same output filters and the produced strings differ only in this last symbol \(t_i\), it follows that a string can leave \(x_i^s\) if and only if its original counterpart can exit \(x_i\). Once it leaves, the string returns to \(x_{comp}\), where the symbol \(t_i\) is replaced with \(t_j\) in different copies, provided that \(\{x_i,x_j\} \in E_G\). Each of the copies is sent to the corresponding node \(x_j^s\) and the process described above restarts. It immediately follows that \(L(\varGamma ')=L(\varGamma )\).

It is easy to notice that \(Time_{\varGamma '}(w)=2Time_{\varGamma }(w)\) for every \(w\in L(\varGamma )\), hence the second statement is proved.

Finally, this construction needs two more nodes, therefore \(size(\varGamma ')=size(\varGamma )+2\). \(\square\)
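The routing rules of the node \(x_{comp}\) can be generated mechanically from the edge set \(E_G\). The sketch below is purely illustrative: the helper name and the rendering of each symbol \(t_i\) as the string "t" + str(i) are our own assumptions.

```python
# Illustrative sketch: building the rule set of node x_comp of Theorem 1
# from the edge set E_G of the original network Gamma.
# A rule ("", t_j, "#", t_i) splices w t_j with the axiom # t_i into w t_i.

def xcomp_rules(edges):
    rules = [("", "t0", "#", "t1")]   # start rule: route the input to x_1 = In
    for i, j in edges:                # each undirected edge {x_i, x_j} ...
        rules.append(("", f"t{i}", "#", f"t{j}"))   # ... allows t_i -> t_j
        rules.append(("", f"t{j}", "#", f"t{i}"))   # ... and t_j -> t_i
    return rules
```

For a path graph on three nodes this yields one start rule plus two rules per edge, i.e. five rules in total.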

3.2 Simulating arbitrary NSP by star NSP

Theorem 2

For every NSP \(\varGamma\) one can construct a star NSP \(\varGamma '\) such that the following conditions are satisfied:

  1. 1.

    \(L(\varGamma )=L(\varGamma ')\).

  2. 2.

    \(Time_{\varGamma '}(n) \in {\mathcal {O}}(Time_{\varGamma } (n))\).

  3. 3.

    \(size(\varGamma ')=size(\varGamma )+2\).

Proof

The simulation is identical to the one for complete graphs. The node \(x_{comp}\) is set as the center of the star network, while all the other nodes defined in the previous proof are connected to it, as shown in Fig. 3.

Fig. 3
figure 3

The underlying graph of \(\varGamma '\)

Clearly, each computation in \(\varGamma '\) proceeds as in the previous construction, hence all the statements of the theorem follow. \(\square\)

3.3 Simulating arbitrary NSP by grid NSP

Theorem 3

For every NSP \(\varGamma\) one can construct a grid NSP \(\varGamma '\) such that the following conditions are satisfied:

  1. 1.

    \(L(\varGamma )=L(\varGamma ')\).

  2. 2.

    \(Time_{\varGamma '}(n) \in {\mathcal {O}}(Time_{\varGamma } (n))\).

  3. 3.

    \(size(\varGamma ')=3size(\varGamma )+3\).

Proof

Let \(\varGamma =(V,U,<,>,G,{{\mathcal {N}}},\beta ,{\underline{In}},{\underline{Halt}})\) be an NSP with the underlying graph \(G=(X_G,E_G)\) and \(X_G=\{x_1,x_2,\dots ,x_n\}\) for some \(n\ge 2\), where \(x_1 \equiv {\underline{In}}\) and \(x_n \equiv {\underline{Halt}}\). We construct the NSP \(\varGamma '=(V,U',<,>,G',{{\mathcal {N}}'},\beta ',\underline{In'},\underline{Halt'})\), where

$$\begin{aligned} U'&= U \cup T \cup \{\#\},\\ T&= \{ t_i, t_i', t_i^{1}, t_i^{2} \mid 1 \le i\le n\},\\ G'&= (X_{G'},E_{G'}), \\ X_{G'}&= \{\underline{In'},\underline{Halt'}, D, D'\}\cup \{x_i^s\mid 1\le i\le n-1\}\cup \{x_i^{comp},x_i^{connect}\mid 1\le i\le n\}. \end{aligned}$$

The underlying graph of the network \(\varGamma '\) is the grid graph with width 3 and height \(n+1\) from Fig. 4 below and its nodes are defined as follows:

  • node \(\underline{In'}\):

    $$\begin{aligned} \begin{array}{ll} S = \{[(a,>);(\#,>t_1)]\mid a \in U \}, &{} A = \{\#>t_1\},\\ PI = U'\setminus (\{\#\} \cup T), &{} FI = \{\#\} \cup T,\\ PO = U'\setminus \{\#\}, &{} FO = \{\#\},\\ \beta ' = (w) &{} \end{array} \end{aligned}$$
  • nodes \(x_i^{s}, 1\le i\le n-1\):

    $$\begin{aligned} \begin{array}{ll} S = S_{x_i}, &{} A = \{ zt_i \mid z \in A_{x_i} \},\\ PI = PI_{x_i}, &{} FI = FI_{x_i} \cup T \setminus \{t_i\} \cup \{\#\},\\ PO = PO_{x_i}, &{} FO = FO_{x_i},\\ \beta ' = \beta (x_i) &{} \end{array} \end{aligned}$$
  • nodes D, \(D'\):

    $$\begin{aligned} \begin{array}{ll} S = \emptyset , &{} A = \emptyset ,\\ PI = \emptyset , &{} FI = U',\\ PO = \emptyset , &{} FO = U',\\ \beta ' = (s) &{} \end{array} \end{aligned}$$
  • nodes \(x_i^{comp}, 1 \le i \le n\):

    \(S = \{[(\varepsilon ,t_i');(\#,t_i)] \} \cup\) \(\left\{ \begin{array}{ll} \{[(\varepsilon ,t_i);(\#,t_j^{1})] \mid 1 \le i \ne j \le n \wedge \{x_i,x_j\}\in E_G \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ even } \text{ number, } \\ \{ [(\varepsilon ,t_i);(\#,t_j^{2})] \mid 1 \le i \ne j \le n \wedge \{x_i,x_j\}\in E_G \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ odd } \text{ number, } \\ \end{array}\right.\) \(A = \left\{ \begin{array}{ll} \{ \#t_j^{1} \mid 1 \le i \ne j \le n \wedge \{x_i,x_j\} \in E_G \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ even } \text{ number, } \\ \{ \#t_j^{2} \mid 1 \le i \ne j \le n \wedge \{x_i,x_j\} \in E_G \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ odd } \text{ number, } \\ \end{array}\right.\) \(\begin{array}{ll} PI =\{t_i, t_i'\} , &{} FI = \{\#\} ,\\ PO = U'\setminus \{\#\} , &{} FO = \{\#\} ,\\ \beta ' = (w) &{} \end{array}\)

  • nodes \(x_i^{connect}, 1 \le i \le n\): \(S = \{ [(\varepsilon ,t_i^{1});(\#,t_i')] \} \cup \{ [(\varepsilon ,t_i^{2});(\#,t_i')] \} \cup\) \(\left\{ \begin{array}{ll} \{ [(\varepsilon ,t_j^{1});(\#,t_j^{2})] \mid 1 \le i \ne j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ even } \text{ number, } \\ \{ [(\varepsilon ,t_j^{2});(\#,t_j^{1})] \mid 1 \le i \ne j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ odd } \text{ number, } \\ \end{array}\right.\) \(A = \{ \#t_i' \} \cup \left\{ \begin{array}{ll} \{ \#t_j^{2} \mid 1 \le i \ne j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ even } \text{ number, } \\ \{ \#t_j^{1} \mid 1 \le i \ne j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ odd } \text{ number, } \\ \end{array}\right.\) \(PI = \left\{ \begin{array}{ll} \{ t_j^{1} \mid 1 \le j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ even } \text{ number, } \\ \{ t_j^{2} \mid 1 \le j \le n \}, \text{ if } \text{ i } \text{ is } \text{ an } \text{ odd } \text{ number, } \\ \end{array}\right.\) \(FI = \{\#\}\), \(PO = U'\), \(FO = \emptyset\), \(\beta = (w)\)

Fig. 4
figure 4

The underlying graph of \(\varGamma '\)

We now analyze a computation of \(\varGamma '\) on the input string \(<w>\). In the input node \(\underline{In'}\), the symbol \(t_1\) is attached at the end. Next, the string enters \(x_1^s\) and the simulation of a computation in \(\varGamma\) starts. Thus, the string \(<w>t_1\) lies in \(x_1^s\) while the string \(<w>\) is found in \(x_1\), the input node of \(\varGamma\). More generally, we may assume that a string \(zt_i\) is found in a node \(x_i^s\) of \(\varGamma '\) if and only if the corresponding string z lies in \(x_i\) of \(\varGamma\). Note that the strings can no longer return to \(\underline{In'}\) because of its FI filter. Note also that the node \(\underline{In'}\) and the nodes D and \(D'\) will not accept any string from now on because of their PI filters. Consequently, the first row can be disregarded for the rest of the computation.

Let \(x_i\) be a splicing node, where a rule [(a, b); (u, v)] is applied to w yielding \(w'\) and \(w''\). Then, the same rule is applied in \(x_i^s\) and strings of the form \(w't_i\) and \(w''t_i\) are produced. Indeed, since all the strings in \(A_{x_i^s}\) and any string entering \(x_i^s\) have the symbol \(t_i\) at the end, the splicing rule always yields strings keeping the symbol \(t_i\) as the last one. Since the nodes \(x_i\) and \(x_i^s\) have the same output filters and the produced strings differ only in this last symbol \(t_i\), it follows that a string can leave \(x_i^s\) if and only if its original counterpart can exit \(x_i\). Once it leaves the node, the string can only enter the linked node \(x_i^{comp}\) and, depending on whether i is even or odd, the symbol \(t_i\) is replaced with \(t_j^{1}\) or \(t_j^{2}\), respectively, in different copies, provided that \(\{x_i,x_j\} \in E_G\). Because of this last transformation, the yielded strings can only enter the node \(x_i^{connect}\). At this point, a string of the form \(wt_j^{1}\) or \(wt_j^{2}\) travels through the column of nodes \(x_i^{connect}\) until it reaches the node \(x_j^{connect}\). More precisely, in \(x_i^{connect}\) the symbols \(t_j^{1}\) and \(t_j^{2}\) are switched alternately, forcing the string to go simultaneously to \(x_{i-1}^{connect}\) and \(x_{i+1}^{connect}\), provided that \(i-1\ge 1\) and \(i+1\le n\). In this way, the string eventually arrives at the node \(x_j^{connect}\), where the symbol \(t_j^{1}\) or \(t_j^{2}\) is replaced with \(t_j'\), blocking the string from continuing through the column of connect nodes. Lastly, this last symbol is replaced by \(t_j\) in \(x_j^{comp}\) and the string enters the intended node \(x_j^s\), provided that it meets the requirements set by the input filters of this last node. Otherwise, it is lost.
Summarizing, consider a splicing step in \(\varGamma\) that produces a string \(z'\) from z in node \(x_i\), \(1\le i\le n\), which is further sent to \(x_j\), \(j>i\) (the case \(j<i\) is analogous). These two steps (splicing and communication) are simulated in \(\varGamma '\) by a series of splicing steps such that the string \(zt_i\) is transformed into \(z't_i\) in \(x_i^s\), then sent via an itinerary that starts with the node \(x_i^{comp}\), continues with the nodes \(x_i^{connect}, x_{i+1}^{connect},\dots ,x_j^{connect}\), and finishes with the nodes \(x_j^{comp}\) and \(x_j^s\). Therefore, the induction step is valid. From this reasoning, we infer that \(L(\varGamma )=L(\varGamma ')\). Following the explanations closely, we note that each splicing step in the node \(x_i\) of \(\varGamma\) is simulated by at most \(n+3\) splicing steps in \(\varGamma '\): one step in \(x_i^s\), followed by one step in \(x_i^{comp}\), then at most n splicing steps in the nodes from \(x_i^{connect}\) to \(x_j^{connect}\), and finally one more step in \(x_j^{comp}\) before the string enters \(x_j^s\). Since the size of \(\varGamma\) is a constant independent of the input, the second statement of the theorem follows. The third statement follows immediately from Fig. 4. \(\square\)
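The itinerary above can be sketched as follows (illustrative Python; the node labels and function names are our own choosing):

```python
# Illustrative sketch: the sequence of Gamma' nodes visited while simulating
# one splicing-plus-communication step of Gamma from x_i to x_j (Theorem 3).

def itinerary(i, j):
    """Nodes of Gamma' visited on the way from x_i^s to x_j^s."""
    step = 1 if j > i else -1
    path = [f"x{i}^s", f"x{i}^comp"]
    path += [f"x{k}^connect" for k in range(i, j + step, step)]
    path += [f"x{j}^comp", f"x{j}^s"]
    return path

def splicing_steps(i, j):
    """Splicing steps before entering x_j^s: |i - j| + 4 <= n + 3."""
    return len(itinerary(i, j)) - 1
```

Since \(|i-j|\le n-1\), the number of splicing steps per simulated step is bounded by \(n+3\), which matches the constant-factor slowdown claimed in the theorem.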

4 Conclusions and further work

Motivated by possible implementations, we have investigated the possibility of transforming an NSP with an arbitrary underlying graph into an equivalent NSP (one accepting the same language) with an underlying graph of a predefined topology. We have considered here the complete, star, and grid graphs. We have proposed constructions for these transformations such that: (i) they do not increase the time complexity by more than a constant factor, and (ii) they do not increase the network size by more than a constant. The communication protocol of the networks considered here is based on random-context conditions. We would like to investigate whether similar constructions can be obtained for networks of polarized splicing processors, where the protocol of communication is regulated by the polarization of the nodes and a mapping that defines the polarization of data.