Finite size effects in neural networks

  • Laura Viana
  • Arnulfo Castellanos
  • A. C. C. Coolen
Neural Modeling (Biophysical and Structural Models)
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1606)


In this paper we give an overview of a recently developed theory [1, 2] for calculating finite-size corrections to the dynamical equations of separable neural networks away from saturation. According to this theory, finite-size effects are described by a linear-noise Fokker-Planck equation for the fluctuations (corresponding to an Ornstein-Uhlenbeck process), whose solution is fully characterized by its first two moments. The theory is applied to a particular problem in which detailed balance does not hold.
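The fluctuation dynamics described above can be illustrated with a minimal numerical sketch. The snippet below simulates a generic one-dimensional Ornstein-Uhlenbeck process by Euler-Maruyama integration and checks that its stationary distribution is Gaussian, fixed by its first two moments; the drift rate `A` and noise strength `B` are illustrative constants, not the paper's actual finite-size coefficients.

```python
import numpy as np

# Euler-Maruyama simulation of a one-dimensional Ornstein-Uhlenbeck process
#     dq = -A q dt + sqrt(B) dW,
# the generic linear-noise form referred to in the abstract.
# A and B are assumed example values, not the paper's derived coefficients.
rng = np.random.default_rng(0)
A, B = 1.0, 0.5               # relaxation rate and noise strength (illustrative)
dt, n_steps, n_paths = 0.01, 20_000, 500

q = np.zeros(n_paths)         # ensemble of fluctuation trajectories
for _ in range(n_steps):
    q += -A * q * dt + np.sqrt(B * dt) * rng.standard_normal(n_paths)

# The stationary OU distribution is Gaussian with mean 0 and
# variance B / (2 A), so the first two moments determine it completely.
print("sample mean    :", q.mean())
print("sample variance:", q.var(), "(theory:", B / (2 * A), ")")
```

Because the process is linear with additive noise, tracking the mean and covariance is equivalent to solving the full Fokker-Planck equation, which is what makes the linear-noise description tractable.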


PACS: 87.30, 05.20




  1. Castellanos, A., Coolen, A.C.C., Viana, L.: Finite size effects in separable recurrent neural networks. J. Phys. A: Math. Gen. 31 (1998) 6615–6634
  2. Castellanos, A.: Ph.D. Thesis, CICESE-UNAM, México (1998)
  3. Kohring, G.A.: J. Phys. A: Math. Gen. 23 (1990) 2237
  4. Coolen, A.C.C., Sherrington, D.: In: Taylor, J.G. (ed.): Mathematical Approaches to Neural Networks. North-Holland, Amsterdam, p. 293
  5. Gardiner, C.W.: Handbook of Stochastic Methods. Springer, Berlin (1990)

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Laura Viana¹
  • Arnulfo Castellanos²
  • A. C. C. Coolen³
  1. Centro de Ciencias de la Materia Condensada, UNAM, Ensenada, Mexico
  2. Dept. de Física, Universidad de Sonora, Hermosillo, Mexico
  3. Dept. of Mathematics, King’s College, University of London, Strand, London, UK
