
Relating an Adaptive Network’s Structure to Its Emerging Behaviour for Hebbian Learning

  • Conference paper
  • In: Theory and Practice of Natural Computing (TPNC 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11324)

Abstract

In this paper it is analysed how the emerging behaviour of an adaptive network can be related to characteristics of the adaptive network’s structure (which includes the adaptation structure). In particular, this is addressed for mental networks based on Hebbian learning. To this end, relevant properties of the network and of the adaptation that have been identified are discussed. As a result, it is found that in an achieved equilibrium state the value of a connection weight has a functional relation to the values of the connected states.



Author information

Correspondence to Jan Treur.

Appendix Proofs of Propositions 1 and 2

Proof of Proposition 1.

(a) Consider μ < 1. By Definition 2(b) the function W → c(V1, V2, W) − μW is monotonically decreasing in W, and since μ − 1 < 0 the function W → (μ − 1)W is strictly monotonically decreasing in W. Therefore their sum is also strictly monotonically decreasing in W. This sum is

$$ \text{c}(V_1, V_2, W) - \mu W + (\mu - 1)W = \text{c}(V_1, V_2, W) - W $$

So the function W → c(V1, V2, W) − W is strictly monotonically decreasing in W; by Definition 2(d) it holds that c(V1, V2, 1) − 1 = μ − 1 < 0, and by Definition 2(c) it holds that c(V1, V2, 0) − 0 ≥ 0. Therefore c(V1, V2, W) − W has exactly one zero: for each V1, V2 the equation c(V1, V2, W) − W = 0 has exactly one solution W, indicated by fμ(V1, V2). This provides a unique function fμ: [0, 1] × [0, 1] → [0, 1] implicitly defined by c(V1, V2, fμ(V1, V2)) = fμ(V1, V2). To prove that fμ is monotonically increasing, suppose V1 ≤ V′1 and V2 ≤ V′2; then by the monotonicity of V1, V2 → c(V1, V2, W) in Definition 2(a) it holds that

$$ 0 = \text{c}(V_1, V_2, \text{f}_{\mu}(V_1, V_2)) - \text{f}_{\mu}(V_1, V_2) \le \text{c}(V'_1, V'_2, \text{f}_{\mu}(V_1, V_2)) - \text{f}_{\mu}(V_1, V_2) $$

So c(V′1, V′2, fμ(V1, V2)) − fμ(V1, V2) ≥ 0 whereas c(V′1, V′2, fμ(V′1, V′2)) − fμ(V′1, V′2) = 0

and therefore

$$ \text{c}(V'_1, V'_2, \text{f}_{\mu}(V'_1, V'_2)) - \text{f}_{\mu}(V'_1, V'_2) \le \text{c}(V'_1, V'_2, \text{f}_{\mu}(V_1, V_2)) - \text{f}_{\mu}(V_1, V_2) $$

By the strict decreasing monotonicity of W → c(V1, V2, W) − W it follows that fμ(V1, V2) > fμ(V′1, V′2) cannot hold, so fμ(V1, V2) ≤ fμ(V′1, V′2). This proves that fμ is monotonically increasing. From this monotonicity of fμ(..) it follows that fμ(1, 1) is the maximal value and fμ(0, 0) the minimal value. Now by Definition 1(d) it follows that fμ(0, 0) = c(0, 0, fμ(0, 0)) = μ fμ(0, 0), and as μ < 1 this implies fμ(0, 0) = 0.
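
To make this fixed-point construction concrete, here is a minimal numerical sketch (an illustration only, not part of the paper): it solves c(V1, V2, W) = W by bisection for an assumed combination function with cs(V1, V2) = V1 V2 and cc(W) = 1 − W, i.e. c(V1, V2, W) = V1 V2 (1 − W) + μW. The names hebb_c and f_mu and this specific choice of cs and cc are illustrative assumptions.

# Minimal numerical sketch (illustration only): solve c(V1, V2, W) = W by bisection
# for the assumed combination function c(V1, V2, W) = V1*V2*(1 - W) + mu*W,
# i.e. cs(V1, V2) = V1*V2 and cc(W) = 1 - W, with persistence factor mu < 1.

def hebb_c(v1, v2, w, mu):
    return v1 * v2 * (1.0 - w) + mu * w

def f_mu(v1, v2, mu, tol=1e-12):
    # W -> c(V1, V2, W) - W is strictly decreasing, nonnegative at W = 0 and
    # negative at W = 1, so bisection on [0, 1] converges to the unique root.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hebb_c(v1, v2, mid, mu) - mid >= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = 0.9
for v1, v2 in [(0.0, 0.0), (0.4, 0.6), (0.8, 0.8), (1.0, 1.0)]:
    print(v1, v2, f_mu(v1, v2, mu))   # equilibrium weights, increasing in V1, V2

For this choice the equilibrium values increase with V1 and V2, with fμ(0, 0) = 0 and fμ(1, 1) = 1/(2 − μ), in line with the minimal and maximal values identified above; a closed-form expression for this example is derived after the proof of Proposition 2.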

(b) Consider μ = 1. When both V1, V2 are > 0 and c(V1, V2, W) = W, then W = 1, by Definition 1(d). This defines a function f1(V1, V2) of V1, V2 ∈ (0, 1], namely f1(V1, V2) = 1 for all V1, V2 > 0. When one of V1, V2 is 0 and μ = 1, then also by Definition 1(d) it always holds that c(V1, V2, W) = W, so in this case every W is a solution and no unique value for f1(V1, V2) can be defined.
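
As a small illustration of case (b), using the same assumed example function as in the sketch above (cs(V1, V2) = V1 V2, cc(W) = 1 − W): for μ = 1 the equilibrium equation c(V1, V2, W) = W reduces to

$$ V_1 V_2 (1 - W) + W = W \;\Leftrightarrow\; V_1 V_2 (1 - W) = 0 $$

so W = 1 whenever V1, V2 > 0, while for V1 = 0 or V2 = 0 every W ∈ [0, 1] solves the equation.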

Proof of Proposition 2.

(a) From cc(W) being monotonically decreasing in W it follows that W → 1/cc(W) is monotonically increasing on [0, 1). Moreover, the identity function W → W is strictly monotonically increasing; therefore for μ < 1 the function hμ(W) = (1 − μ)W/cc(W) is strictly monotonically increasing. Hence hμ is injective and has an inverse function gμ on the range of hμ: a function gμ with gμ(hμ(W)) = W for all W ∈ [0, 1).

(b) Suppose μ < 1 and c(V1, V2, W) = W. Then from Definition 2(d) it follows that W = 1 is excluded, since from both c(V1, V2, W) = W and c(V1, V2, W) = μW it would follow that μ = 1, which is not the case. Therefore W < 1, and the following holds:

$$ \begin{aligned} \text{cs}(V_1, V_2)\,\text{cc}(W) + \mu W &= W \\ \text{cs}(V_1, V_2)\,\text{cc}(W) &= (1 - \mu)W \\ \text{cs}(V_1, V_2) &= (1 - \mu)W/\text{cc}(W) = \text{h}_{\mu}(W) \end{aligned} $$

So hμ(W) = cs(V1, V2). Applying the inverse gμ yields W = gμ(hμ(W)) = gμ(cs(V1, V2)). Therefore, in this case, for the function fμ from Theorem 1 it holds:

$$ \text{f}_{\mu}(V_1, V_2) = W = \text{g}_{\mu}(\text{cs}(V_1, V_2)) < 1 $$

so fμ is the composition of cs(..) followed by gμ; a worked example for a specific choice of cs and cc is given after this proof.

(c) For μ = 1 the equation c(V1, V2, W) = W becomes cs(V1, V2) cc(W) = 0, which is equivalent to cs(V1, V2) = 0 or cc(W) = 0. From the definition of separation of variables it follows that this is equivalent to V1 = 0 or V2 = 0 or W = 1.

(d) Suppose μ < 1 and c(V1, V2, W) = W. Then, because cs(..) and gμ are both monotonically increasing, the maximal W is gμ(cs(1, 1)) and the minimal W is gμ(cs(0, 0)). For μ = 1 these values are always 1 when V1, V2 > 0, and any value in [0, 1] (including 0) when one of V1, V2 is 0.
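
As a worked example of (b) and (d), again with the assumed illustrative choice cs(V1, V2) = V1 V2 and cc(W) = 1 − W (not a result stated in the paper itself): here hμ(W) = (1 − μ)W/(1 − W), its inverse is gμ(X) = X/(X + 1 − μ), and therefore for μ < 1

$$ \text{f}_{\mu}(V_1, V_2) = \text{g}_{\mu}(V_1 V_2) = \frac{V_1 V_2}{V_1 V_2 + 1 - \mu} < 1 $$

This expression is monotonically increasing in V1 and V2, equals 0 when V1 V2 = 0, and attains its maximal value 1/(2 − μ) at V1 = V2 = 1; these are exactly the values found numerically by the bisection sketch after the proof of Proposition 1.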


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Treur, J. (2018). Relating an Adaptive Network’s Structure to Its Emerging Behaviour for Hebbian Learning. In: Fagan, D., Martín-Vide, C., O'Neill, M., Vega-Rodríguez, M.A. (eds) Theory and Practice of Natural Computing. TPNC 2018. Lecture Notes in Computer Science, vol 11324. Springer, Cham. https://doi.org/10.1007/978-3-030-04070-3_28

  • DOI: https://doi.org/10.1007/978-3-030-04070-3_28
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04069-7

  • Online ISBN: 978-3-030-04070-3
