
Matrix representation and simulation algorithm of spiking neural P systems with structural plasticity

  • Regular Paper
  • Published in: Journal of Membrane Computing

Abstract

In this paper, we create a matrix representation for spiking neural P systems with structural plasticity (SNPSP, for short), taking inspiration from existing algorithms and representations for related variants. Using our matrix representation, we provide a simulation algorithm for SNPSP systems. We prove that the algorithm correctly simulates an SNPSP system: our representation and algorithm are able to capture the syntax and semantics of SNPSP systems, e.g. plasticity rules, dynamism in the synapse set. Analyses of the time and space complexity of our algorithm show that its implementation can benefit using parallel computers. Our representation and simulation algorithm can be useful when implementing SNPSP systems and related variants with a dynamic topology, in software or hardware.




Acknowledgements

R.T.A. de la Cruz is supported by a graduate scholarship from the DOST-ERDT project. F.G.C. Cabarle thanks the support of the DOST-ERDT project, the Dean Ruben A. Garcia PCA AY2018–2019, and an RLC AY2018–2019 grant of the OVCRD, both of the latter from UP Diliman. H. Adorna thanks the support of the UPD-OVCRD RLC grant, the ERDT Research Program of the College of Engineering, UP Diliman, and the Semirara Mining Corporation Professorial Chair for Computer Science. N. Hernandez is supported by the Vea Technology for All professorial chair. The work of X. Zeng was supported by the National Natural Science Foundation of China (Grant Nos. 61472333, 61772441, 61472335, 61672033, 61425002, 61872309, 61771331), the Project of Marine Economic Innovation and Development in Xiamen (No. 16PFW034SF02), and the Natural Science Foundation of the Higher Education Institutions of Fujian Province (No. JZ160400). K. Buño thanks the Dr. Olegario G. Villoria, Jr. Professorial Chair on Transportation/Logistics (2018–present).

Author information

Correspondence to Francis George C. Cabarle.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Theorem proofs

Proof for Theorem 1

By definition, \(Rule ^{(k)} = ( Fi ^{(k)}, Ti ^{(k)}, os^{(k)}, sp^{(k)})\). Since the input \(Conf ^{(k-1)}\) is the configuration of the previous time step (fed into the function as \(Conf ^{(k)}\)), we first increment k at Line 1 so that the resulting rule node carries the correct time step; hence the newRule() constructor at Line 18 is called with the right k. Line 4 evaluates a formula and assigns it to a temporary variable Sp, for spikes. The formula consists of two parts, a multiplication and a subtraction:

$$\begin{aligned} Sp&= \left( C^{(k-1)} \times Sr _{\mathcal{R}}^T\right) - P_{\mathcal{R}} \\&= \left( \Big [c_i^{(k-1)}\Big ]_m \times \Big [sr_{{\mathcal{R}},i,j}\Big ]_{m \times {r_{{\mathcal{R}}}}} \right) - \Big [p_{{\mathcal{R}},i}\Big ]_{{r_{{\mathcal{R}}}}} = \left( \bigg [ \sum _i{c_i^{(k-1)} sr_{{\mathcal{R}},i,j}} \bigg ]_{{r_{{\mathcal{R}}}}} \right) - \bigg [p_{{\mathcal{R}},i}\bigg ]_{{r_{{\mathcal{R}}}}} \end{aligned}$$

Since \(sr_{{\mathcal{R}},i,j}\) here denotes the \((i,j)\) entry of \(Sr _{\mathcal{R}}^T\), i.e. \(sr_{{\mathcal{R}},i,j} = 1\) if rule \({r_{{\mathcal{R}},j}}\) belongs to neuron \(\sigma _i\) (0 otherwise), and \(c_i^{(k-1)}\) is the number of spikes in neuron \(\sigma _i\) at time \(k-1\), we have

$$\begin{aligned} c_i^{(k-1)} sr_{{\mathcal{R}},i,j} = {\left\{ \begin{array}{ll} c_i^{(k-1)}, &{} {r_{{\mathcal{R}},j}}\in R_i; \\ 0, &{} {\text {otherwise.}} \end{array}\right. } \end{aligned}$$

Also noting that each rule is associated with exactly one neuron, we can conclude that \(\sum _i{c_i^{(k-1)} sr_{{\mathcal{R}},i,j}}\) is the number of spikes in the source neuron of rule \({r_{{\mathcal{R}},j}}\). We let \(rsp_{{\mathcal{R}},i}\) be this number. Knowing each rule's source neuron spike count, we can now use the P and Q vectors to check compatibility with each rule's regular expression. Thus

$$\begin{aligned} Sp&= \left( \bigg [ \sum _i{c_i^{(k-1)} sr_{{\mathcal{R}},i,j}} \bigg ]_{{r_{{\mathcal{R}}}}} \right) - \bigg [p_{{\mathcal{R}},i}\bigg ]_{{r_{{\mathcal{R}}}}} \\&= \Big [ rsp_{{\mathcal{R}},i} \Big ]_{{r_{{\mathcal{R}}}}} - \Big [p_{{\mathcal{R}},i}\Big ]_{{r_{{\mathcal{R}}}}} = \Big [ rsp_{{\mathcal{R}},i} - p_{{\mathcal{R}},i}\Big ]_{{r_{{\mathcal{R}}}}} \end{aligned}$$

With s being the current spike count of a given neuron, we need to match \(a^s\) with \(a^p(a^q)^*=a^{p+qn}\), i.e. we must have \(s=p+qn\) for some nonnegative integers p, q, and n. So we first subtract p in Line 4, and check for qn in the if clause of Line 7. There are two cases for \(a^s\) to match the regular expression. First, if the rule has a nonzero q, it suffices to check \(Sp_i \bmod q_{{\mathcal{R}},i} = 0\), provided \(Sp_i\) is nonnegative (a negative \(Sp_i\) means \(rsp_{{\mathcal{R}},i} - p_{{\mathcal{R}},i} = s - p < 0\), so no match is possible). The other case is \(q=0\), where the regular expression is of the form \(a^p\). Then \(p+qn=p\), a constant, and \(s=p+qn\) can only be satisfied if \(s-p = rsp_{{\mathcal{R}},i} - p_{{\mathcal{R}},i} = 0\). If the regular expression is matched, \(fi_{{\mathcal{R}},i}^{(k)} = 1\); otherwise, \(fi_{{\mathcal{R}},i}^{(k)} = 0\). Since the loop of Line 6 iterates over all rules of type \({\mathcal{R}}\), and \({\mathcal{R}}\) ranges over both \({\mathcal{P}}\) and \({\mathcal{S}}\) (Line 3), the two loops cover all the rules. Thus, \(Fi ^{(k)}\) now tells us which rules have matched their regular expressions and can fire.
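The applicability check above can be sketched in code. This is an illustrative sketch, not the paper's pseudocode: the names C, Sr, P, and Q mirror the symbols used in the proof, and the two-neuron system in the example is invented.

```python
def applicable_rules(C, Sr, P, Q):
    """Return Fi, where Fi[i] = 1 iff rule i can fire.

    C  : spike counts per neuron (length m)
    Sr : Sr[i][j] = 1 iff rule i belongs to neuron j (r x m)
    P, Q : the regex of rule i, a^p (a^q)*, is encoded as (P[i], Q[i]).
    """
    Fi = [0] * len(Sr)
    for i in range(len(Sr)):
        # Each rule belongs to exactly one neuron, so this sum picks out
        # the spike count of rule i's source neuron (rsp in the proof).
        rsp = sum(C[j] * Sr[i][j] for j in range(len(C)))
        sp = rsp - P[i]            # Sp_i = rsp - p
        if sp < 0:
            continue               # s < p: a^s cannot match a^p (a^q)*
        if Q[i] > 0:
            Fi[i] = 1 if sp % Q[i] == 0 else 0   # s = p + qn for some n
        else:
            Fi[i] = 1 if sp == 0 else 0          # regex is exactly a^p
    return Fi

# Neurons with 3 and 2 spikes; rule 0 needs a(aa)* in neuron 0,
# rule 1 needs exactly a^2 in neuron 1. Both are applicable here.
print(applicable_rules([3, 2], [[1, 0], [0, 1]], [1, 2], [2, 0]))  # [1, 1]
```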

\(os^{(k)}\) by default copies the value from the previous time step, \(os^{(k-1)}\), while \(sp^{(k)}\) stays at 0. The former only increases, and the latter is only set to 1, if an output spike is found to be sent to the environment at time k. This condition is checked by the if clause at Line 9, which is only reached if rule \({r_{{\mathcal{R}},i}}\) fires at time k for the given values of \({\mathcal{R}}\) and i. Thus, we only need to check whether this rule sent an output spike. Since only spiking rules can send spikes to the environment, the condition at Line 9 checks whether the given rule is a spiking rule (\({\mathcal{R}}={\mathcal{S}}\)) and whether it belongs to the output neuron (\(Sr_{{\mathcal{S}},out,i} = 1\)). Thus, \(os^{(k)}\) and \(sp^{(k)}\) are computed correctly.

Lastly, the timer matrix \(Ti ^{(k)}\) is only touched in the for loop of Line 12. For each plasticity rule, we first check whether the rule already fired at the previous time step (Line 13) and is still executing at the current time step (as with the \(\pm\) and \(\mp\) rules). This is checked by looking for a 1 in the primed timers of the said rule from the previous time step (\(Ti_{i'}^{(k-1)}\)), since the timers have already counted down after the initial rule firing. In that case, \(fi_{{{\mathcal{P}}},i}^{(k)}\) is simply set to 0, since a rule still executing is not allowed to fire anew, and the previous primed timers are copied onto the current unprimed timers. Otherwise, if the rule is not to execute a second operation at the current time step, we check whether it fired anew at the current time step (Line 9). Since \({ Fi }_{{\mathcal{R}}}^{(k)}\) now shows which rules are applicable (barring ongoing execution), we can be sure that these rules are applied at time step k, and so we start their timers (Line 17). Given that, \({ Fi }^{(k)}\) and \(Ti ^{(k)}\) are computed correctly.

Therefore, we are now sure that \(os^{(k)}\), \(sp^{(k)}\), \({ Fi }^{(k)}\), and \({ Ti }^{(k)}\) are computed correctly. newRule() is thus sure to be fed the correct arguments, and will return the correct rule node. \(\square\)

Proof for Theorem 2

getSyns() returns a list of all possible synapse nodes \(Syn ^{(k)}\). getCandidates() is not specified in this paper and is assumed to return a list of all possible combinations of created/deleted synapses, based on permutations of destination neurons and the synapse counts of the applicable plasticity rules. Given this, \(Sy _+^{(k)}\) and \(Sy _-^{(k)}\) are the appropriate synapse creation and deletion matrices of each synapse node to be created. Thus, \(Sy _\Delta ^{(k)}\) holds the appropriate synapse change matrix for the same synapse node, and is passed onto the constructor for \(Syn ^{(k)}\) and included in the return list. Therefore, getSyns() returns correctly computed synapse nodes for the given rule node. \(\square\)
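Since getCandidates() is left unspecified, the following is only one plausible reading of it, sketched under the assumption that a creation rule picks some subset of its named destinations: enumerate every way a rule creating n synapses can choose n not-yet-connected destinations. The function name and parameters are hypothetical.

```python
from itertools import combinations

def get_candidates(source, N, n, existing):
    """All candidate synapse sets for one creation rule (a sketch).

    source   : index of the rule's source neuron
    N        : destination neurons named by the rule
    n        : number of synapses the rule creates
    existing : synapses already present, as (src, dst) pairs
    """
    # Only destinations not already connected to the source are eligible.
    eligible = [j for j in N if (source, j) not in existing]
    k = min(n, len(eligible))            # create at most what is available
    return [set((source, j) for j in chosen)
            for chosen in combinations(eligible, k)]

# Rule in neuron 0 creates 2 synapses toward {1, 2, 3}; (0, 1) exists,
# so the only candidate set is {(0, 2), (0, 3)}.
print(get_candidates(0, [1, 2, 3], 2, {(0, 1)}))
```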

Proof for Theorem 3

Given the definitions of \(Fi _{\mathcal{S}}\), \(Sr_{\mathcal{S}}\), and \(Sy\), we have

$$\begin{aligned} G_{\mathcal{S}}^{(k)}&\overset{?}{=} Fi _{\mathcal{S}}^{(k)} \times Sr_{\mathcal{S}} \times Sy ^{(k)} \\&= \Big [fi_{{\mathcal{S}},i}^{(k)}\Big ]_{{r_{{\mathcal{S}}}}} \times \Big [sr_{{\mathcal{S}},i,j}\Big ]_{{r_{{\mathcal{S}}}}\times m} \times \Big [sy_{i,j}^{(k)}\Big ]_{m \times m} = \Big [\sum _i{fi_{{\mathcal{S}},i}^{(k)} sr_{{\mathcal{S}},i,j}}\Big ]_m \times \Big [sy_{i,j}^{(k)}\Big ]_{m \times m} \end{aligned}$$

Since \(fi_{{\mathcal{S}},i}^{(k)} = 1\) if rule \({r_{{\mathcal{S}},i}}\) has spiked at time k (0 otherwise), and \(sr_{{\mathcal{S}},i,j} = 1\) if \({r_{{\mathcal{S}},i}}\) belongs to neuron \(\sigma _j\) (0 otherwise), we have

$$\begin{aligned} fi_{{\mathcal{S}},i}^{(k)} sr_{{\mathcal{S}},i,j} = {\left\{ \begin{array}{ll} 1, &{} {r_{{\mathcal{S}},i}}\in R_j \text { and } {r_{{\mathcal{S}},i}} \text { spiked at time } k; \\ 0, &{} {\text {otherwise.}} \end{array}\right. } \end{aligned}$$

Thus, \(\sum _i{fi_{{\mathcal{S}},i}^{(k)} sr_{{\mathcal{S}},i,j}}\) is the number of spiking rules that have spiked at time k from neuron \(\sigma _j\). However, given that we have restricted neurons to only fire a maximum of one rule each, the value of this summation will only ever be 0 or 1, only indicating whether the neuron had a spiking rule fire or not. Continuing further, \(sy_{i,j}^{(k)} = 1\) if neuron \(\sigma _i\) is connected to \(\sigma _j\) at time k (0 otherwise), so

$$\begin{aligned} G_{\mathcal{S}}^{(k)}&\overset{?}{=}\Big [\sum _i{fi_{{\mathcal{S}},i}^{(k)} sr_{{\mathcal{S}},i,j}}\Big ]_m \times \Big [sy_{i,j}^{(k)}\Big ]_{m \times m} \\&= \Big [ \sigma _{i} \overset{\Sigma s}{ \underset{(k)}{\rightarrow } } \sigma _{} \Big ]_m \times \Big [sy_{i,j}^{(k)}\Big ]_{m \times m} = \Big [ \sigma _{i} \overset{s}{ \underset{(k)}{\rightarrow } } \sigma _{} \Big ]_m \times \Big [ \sigma _{i} \overset{}{ \underset{(k)}{\rightarrow } } \sigma _{j} \Big ]_{m \times m} \\&= \Big [ \sum _i{\Big (\Big ( \sigma _{i} \overset{s}{ \underset{(k)}{\rightarrow } } \sigma _{} \Big )\Big ( \sigma _{i} \overset{}{ \underset{(k)}{\rightarrow } } \sigma _{j} \Big )\Big )} \Big ]_m = \Big [ \sum _i{ \sigma _{i} \overset{s}{ \underset{(k)}{\rightarrow } } \sigma _{j} } \Big ]_m = \Big [ \sigma _{} \overset{\Sigma s}{ \underset{(k)}{\rightarrow } } \sigma _{j} \Big ]_m \end{aligned}$$

Spiking rules can only cause spike gains in a destination neuron if some other source neuron fires a spiking rule to the said destination, and so we finally have

$$\begin{aligned} G_{\mathcal{S}}^{(k)} \overset{?}{=}\Big [ \sigma _{} \overset{\Sigma s}{ \underset{(k)}{\rightarrow } } \sigma _{j} \Big ]_m = \Big [ g_{{\mathcal{S}},j}^{(k)} \Big ]_m \overset{\checkmark }{=}G_{\mathcal{S}}^{(k)} \end{aligned}$$

\(\square\)
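The derivation of Theorem 3 can be checked numerically. The sketch below implements the product \(Fi_{\mathcal{S}}^{(k)} \times Sr_{\mathcal{S}} \times Sy^{(k)}\) with pure-Python matrix arithmetic; the three-neuron system is invented for illustration.

```python
def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v)))
            for j in range(len(M[0]))]

def spiking_gains(Fi_S, Sr_S, Sy):
    """Spike gains per neuron from spiking rules: Fi_S x Sr_S x Sy."""
    fired_per_neuron = vec_mat(Fi_S, Sr_S)   # 0/1: did neuron j spike?
    return vec_mat(fired_per_neuron, Sy)     # gains at each destination

# 3 neurons, 2 spiking rules: rule 0 (in neuron 0) fires, rule 1 (in
# neuron 1) does not; synapses 0->1 and 0->2 exist. Neurons 1 and 2
# each gain one spike.
Fi_S = [1, 0]
Sr_S = [[1, 0, 0], [0, 1, 0]]
Sy   = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
print(spiking_gains(Fi_S, Sr_S, Sy))  # [0, 1, 1]
```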

Proof for Theorem 4

Since plasticity rules can only cause spike gains by creating synapses (because creating synapses would inherently send one spike to the destination neuron), we only need to check \(Sy _+^{(k)}\). Given the definition of \(Sy _+\), we have

$$\begin{aligned} G_{\mathcal{P}}^{(k)}&\overset{?}{=}\sum _{i=1}^{{r_{{\mathcal{P}}}}}{ Sy _{+,i}^{(k)}} = \Big [\sum _{i}{sy_{+,i,j}^{(k)}}\Big ]_m = \Big [\sum _{i}{ \sigma _{i} \overset{+}{ \underset{(k)}{\rightarrow } } \sigma _{j} }\Big ]_m\\&\quad = \Big [ \sigma _{} \overset{\Sigma +}{ \underset{(k)}{\rightarrow } } \sigma _{j} \Big ]_m = \Big [g_{{\mathcal{P}},j}^{(k)}\Big ]_m \overset{\checkmark }{=}G_{\mathcal{P}}^{(k)} \end{aligned}$$

\(\square\)
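In code, the Theorem 4 derivation amounts to column sums of the creation matrix, since each created synapse delivers one spike to its destination. The matrix below is a made-up example.

```python
def plasticity_gains(Sy_plus):
    """Column sums of the r_P x m creation matrix Sy+, one gain per neuron."""
    m = len(Sy_plus[0])
    return [sum(row[j] for row in Sy_plus) for j in range(m)]

# Two plasticity rules over 3 neurons: rule 0 creates synapses that reach
# neurons 1 and 2, rule 1 creates none. Neurons 1 and 2 gain one spike each.
print(plasticity_gains([[0, 1, 1], [0, 0, 0]]))  # [0, 1, 1]
```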

Proof for Theorem 5

Both spiking and plasticity rules can only cause spike loss through spike consumption upon rule firing, where a fired rule \({r_{{\mathcal{R}},i}}\) consumes \(p_{{\mathcal{R}},i}\) spikes from its own neuron. Thus

$$\begin{aligned} L^{(k)} \overset{?}{=} \sum _{{\mathcal{R}}\in \{{\mathcal{P}},{\mathcal{S}}\}} \Big [fi_{{\mathcal{R}},i}^{(k)}\, p_{{\mathcal{R}},i}\Big ]_{{r_{{\mathcal{R}}}}} \times \Big [sr_{{\mathcal{R}},i,j}\Big ]_{{r_{{\mathcal{R}}}}\times m} = \sum _{{\mathcal{R}}\in \{{\mathcal{P}},{\mathcal{S}}\}} \Big [\sum _i{fi_{{\mathcal{R}},i}^{(k)}\, p_{{\mathcal{R}},i}\, sr_{{\mathcal{R}},i,j}}\Big ]_m \end{aligned}$$

Since \(sr_{{\mathcal{R}},i,j}\) will only have a nonzero value if rule \({r_{{\mathcal{R}},i}}\) is in neuron \(\sigma _j\), we have

$$\begin{aligned} fi_{{\mathcal{R}},i}^{(k)}\, p_{{\mathcal{R}},i}\, sr_{{\mathcal{R}},i,j} = {\left\{ \begin{array}{ll} p_{{\mathcal{R}},i}, &{} {r_{{\mathcal{R}},i}}\in R_j \text { and } {r_{{\mathcal{R}},i}} \text { fired at time } k; \\ 0, &{} {\text {otherwise.}} \end{array}\right. } \end{aligned}$$

so the jth entry of the sum is the total number of spikes consumed in neuron \(\sigma _j\) at time k. Spike losses will only ever be caused by spike consumption from rule firing in a given neuron. Thus, also given the definition of \(L^{(k)}\), the formula above correctly computes the loss vector \(\Big [l_j^{(k)}\Big ]_m\).

Lines 9–11 would tick the timer to get \(Ti '^{(k)}\), by manually decreasing each element of the matrix by 1 unless the value is 0. \(\square\)
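The consumption-based loss described in this proof can be sketched as follows: each fired rule consumes its p spikes from its own neuron, so the loss at neuron j sums \(fi_i \cdot p_i\) over the rules belonging to it. The names follow the proof's symbols; the example system is invented.

```python
def spike_losses(Fi, P, Sr):
    """Loss per neuron: sum of Fi[i] * P[i] over rules i in that neuron.

    Fi : Fi[i] = 1 iff rule i fired
    P  : P[i] = spikes consumed by rule i when it fires
    Sr : Sr[i][j] = 1 iff rule i belongs to neuron j (r x m)
    """
    m = len(Sr[0])
    return [sum(Fi[i] * P[i] * Sr[i][j] for i in range(len(Fi)))
            for j in range(m)]

# Rule 0 (in neuron 0) fired and consumes 2 spikes; rule 1 (in neuron 1,
# consuming 3) did not fire. Only neuron 0 loses spikes.
print(spike_losses([1, 0], [2, 3], [[1, 0], [0, 1]]))  # [2, 0]
```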

Proof for Theorem 6

By definition, \(Conf ^{(k)} = (C^{(k)}, Sy ^{(k)}, Ti '^{(k)})\). Lines 1 to 8 have been proven to correctly compute \(C^{(k)}\) and \(Sy ^{(k)}\). The loop in Line 9 iterates over all plasticity rules, while the inner loop of Line 10 goes over the two plasticity operations, creation (1) and deletion (2). Line 11 then either counts down the current unprimed timer (\(ti_{i,j}^{(k)} - 1\)) or keeps it at zero (via max). Thus, the loops correctly compute \(Ti '^{(k)}\). Line 12 then returns the correct configuration node via the constructor for \(Conf ^{(k)}\), passed the correct arguments \(C^{(k)}\), \(Sy ^{(k)}\), and \(Ti '^{(k)}\). \(\square\)
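The timer update of Lines 9–11 is just a clamped decrement applied to every entry. A minimal sketch:

```python
def tick_timers(Ti):
    """Count every timer entry down by one, but never below zero:
    ti' = max(ti - 1, 0), as in Lines 9-11 of the algorithm."""
    return [[max(t - 1, 0) for t in row] for row in Ti]

# A running timer ticks 2 -> 1; expired timers stay at 0.
print(tick_timers([[2, 0], [1, 3]]))  # [[1, 0], [0, 2]]
```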

Proof for Theorem 7

The first three lines are just for initialization. The loop in Line 4 iterates over the configuration nodes in a breadth-first manner (seen in the use of dequeue and enqueue). Line 6 cuts off the computation graph once it reaches a given depth. The loop in Line 9 goes through the rule nodes, connecting them to configuration nodes, before heading to the loop in Line 13. This inner loop goes through the synapse nodes, connects them to the rule nodes, and then generates a new configuration node in Line 16. These two inner loops, from the rule nodes down to the immediate next configuration nodes, generate these three levels in a depth-first manner (as seen with pop and push). Essentially: (1) given a configuration node, generate its subtree of configuration, rule, and synapse nodes up to three levels in a depth-first manner, then (2) visit the resulting configuration nodes in a breadth-first manner. \(\square\)
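The overall traversal shape of this proof can be sketched as a breadth-first walk over configuration nodes with a depth cutoff. Here expand() is a hypothetical stand-in for the rule-node/synapse-node expansion, which is not reproduced; the toy expansion in the example is invented.

```python
from collections import deque

def build_graph(root, expand, max_depth):
    """Visit all configuration nodes reachable within max_depth steps."""
    visited = []
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()      # dequeue: breadth-first order
        visited.append(node)
        if depth >= max_depth:
            continue                       # cut off at the given depth
        for child in expand(node):         # next configuration nodes
            queue.append((child, depth + 1))
    return visited

# Toy expansion: configuration n branches to n*2 and n*2 + 1.
print(build_graph(1, lambda n: [n * 2, n * 2 + 1], 2))  # [1, 2, 3, 4, 5, 6, 7]
```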


About this article


Cite this article

Jimenez, Z.B., Cabarle, F.G.C., de la Cruz, R.T.A. et al. Matrix representation and simulation algorithm of spiking neural P systems with structural plasticity. J Membr Comput 1, 145–160 (2019). https://doi.org/10.1007/s41965-019-00020-3

