Soft Computing, Volume 21, Issue 3, pp 597–609

Closed determination of the number of neurons in the hidden layer of a multi-layered perceptron network

DOI: 10.1007/s00500-016-2416-3

Cite this article as:
Kuri-Morales, A. Soft Comput (2017) 21: 597. doi:10.1007/s00500-016-2416-3


Multi-layered perceptron networks (MLP) have been proven to be universal approximators. However, to take advantage of this theoretical result, we must determine the smallest number of units in the hidden layer (H). Two basic theoretically established requirements are that an adequate activation function be selected and a proper training algorithm be applied. We must also guarantee that (a) the training data comply with the demands of the universal approximation theorem (UAT) and (b) the amount of information present in the training data be determined. We discuss how to preprocess the data in order to meet such demands. Once this is done, a closed formula to determine H may be applied. Knowing H implies that any unknown function associated with the training data may, in practice, be arbitrarily approximated by an MLP. We take advantage of previous work in which a complexity regularization approach tried to minimize the RMS training error. In that work, an algebraic expression for H was sought by sequential trial and error. In contrast, here we find a closed formula \(H=f(m_{O}, N)\), where \(m_{O}\) is the number of units in the input layer and N is the effective size of the training data. The algebraic expression we derive stems from statistically determined lower bounds of H over a range of interest of the \((m_{O}, N)\) pairs. The resulting sequence of 4250 triples \((H, m_{O}, N)\) is replaced by a single 12-term bivariate polynomial. To determine its 12 coefficients and the degrees of the 12 associated terms, a genetic algorithm was applied. The validity of the resulting formula is tested by determining the architecture of twelve MLPs for as many problems and verifying that the RMS error is minimal when using it to determine H.
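The closed formula described above is a 12-term bivariate polynomial in \(m_{O}\) and N. As a minimal sketch of how such a formula would be evaluated in practice, the snippet below computes H from a list of (coefficient, degree in m_O, degree in N) terms. The coefficients and exponents shown are placeholders, not the fitted values from the paper (the abstract does not give them); only the 12-term polynomial structure follows the text.

```python
def hidden_units(m_o, n, terms):
    """Evaluate H = sum_i c_i * m_o**p_i * n**q_i, rounded to an int >= 1.

    m_o   -- number of units in the input layer
    n     -- effective size of the training data
    terms -- list of (coefficient, degree_in_m_o, degree_in_n) tuples
    """
    h = sum(c * (m_o ** p) * (n ** q) for c, p, q in terms)
    return max(1, round(h))

# Hypothetical 12-term model: (coefficient, degree in m_O, degree in N).
# These values are illustrative placeholders only.
TERMS = [
    (1.0, 0, 0), (0.5, 1, 0), (0.01, 0, 1), (0.002, 1, 1),
    (0.05, 2, 0), (1e-5, 0, 2), (3e-4, 2, 1), (1e-5, 1, 2),
    (0.001, 3, 0), (1e-8, 0, 3), (1e-7, 2, 2), (0.1, 1, 0),
]

H = hidden_units(8, 1000, TERMS)
print(H)
```

With the paper's actual genetically optimized coefficients and term degrees substituted into `TERMS`, the same evaluation yields the architecture directly, with no trial-and-error training runs.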


Keywords: Neural networks · Perceptrons · Information theory · Genetic algorithms

Funding information

Funder Name: Asociación Mexicana de Cultura, A.C.
Grant Number: 500221

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Instituto Tecnológico Autónomo de México, Mexico, Mexico
