Greedy Randomized Adaptive Search Procedures

Chapter
Part of the Operations Research/Computer Science Interfaces Series (ORCS, volume 36)

Abstract

This chapter addresses the problem of designing and training artificial neural networks with discrete activation functions for the classification of patterns into two categories. When the set of patterns is not linearly separable, the problem consists of determining the number of neurons in the hidden layer that are needed to classify all patterns correctly, a problem that has been shown to be NP-hard. A GRASP is proposed that exploits the particular structure of the model to determine the neurons of the hidden layer of the network as well as their corresponding weights. The procedure adds neurons, one at a time, until no misclassified patterns remain; the linear separability condition can then be applied to obtain the weights of the output-layer neuron, as sketched below. The result is a trained network that correctly classifies all patterns in the training set. The procedure is tested on ten benchmark datasets, and the results show that it performs well in a reasonable amount of time.
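
A minimal Python sketch of this constructive idea follows. It is illustrative only, not the authors' GRASP: their greedy randomized construction, which exploits the structure of the model, is replaced here by a simple stand-in (several randomized perceptron candidates per step, keeping the one that fixes the most misclassified patterns), and the names constructive_grasp and hidden_outputs, as well as the parameter choices, are hypothetical. Convergence within max_neurons is likewise not guaranteed for this simplified version.

import numpy as np

def perceptron(X, y, epochs=100, rng=None):
    """Plain perceptron on {-1, +1} labels; returns weights and bias."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    w, b = rng.normal(size=d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) <= 0:  # misclassified: perceptron update
                w += y[i] * X[i]
                b += y[i]
    return w, b

def hidden_outputs(X, hidden):
    """Map patterns through the sign-activation (discrete) hidden layer."""
    if not hidden:
        return np.zeros((len(X), 0))
    return np.column_stack([np.sign(X @ w + b) for w, b in hidden])

def constructive_grasp(X, y, candidates=10, max_neurons=20, seed=0):
    """Add hidden threshold neurons one at a time until the training set
    becomes linearly separable in the hidden representation (sketch)."""
    rng = np.random.default_rng(seed)
    hidden = []  # list of (w, b) pairs, one per hidden neuron
    for _ in range(max_neurons):
        H = hidden_outputs(X, hidden)
        # Output layer: check linear separability of the current hidden
        # codes (approximated here by a perceptron run).
        v, c = perceptron(H, y, rng=rng)
        wrong = np.sign(H @ v + c) != y
        if not wrong.any():
            return hidden, (v, c)  # every training pattern is classified
        # Greedy randomized step: build several randomized candidate
        # neurons on the misclassified patterns and keep the best one.
        best, best_score = None, -1
        for _ in range(candidates):
            w, b = perceptron(X[wrong], y[wrong], rng=rng)
            score = np.sum(y[wrong] * (X[wrong] @ w + b) > 0)
            if score > best_score:
                best, best_score = (w, b), score
        hidden.append(best)
    raise RuntimeError("no separating network within max_neurons neurons")

if __name__ == "__main__":
    # XOR: the classic pattern set that is not linearly separable.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, 1, 1, -1])
    hidden, (v, c) = constructive_grasp(X, y)
    H = hidden_outputs(X, hidden)
    print(len(hidden), "hidden neurons; predictions:", np.sign(H @ v + c))

Note that the final perceptron run doubles as the stopping test: once it classifies every hidden code correctly, the training set is linearly separable in the hidden space and its weights serve as the output-layer weights the abstract refers to.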

Key words

Classification problem, neural networks, GRASP, constructive procedure

Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. Instituto Tecnológico y de Estudios Superiores de Monterrey, Nuevo León, México
  2. Universidad Autónoma de Nuevo León, Nuevo León, México
