
Theory of Constraint Satisfaction Neural Networks

  • Massimo Buscema

Abstract

Problems requiring neural network technology are typically those beyond the reach of standard statistics, although it is not uncommon to combine neural networks with statistical methods such as multivariate analysis to obtain solutions more accurate than traditional methods alone. This chapter shows how one particular type of neural network can be used to analyze problems governed by sets of constraints that impose conditions on admissible solutions. Linear and nonlinear constraints can be satisfied simultaneously using the constraint satisfaction (CS) artificial neural network (ANN). The CS ANN is a single-layer model whose weight matrix is symmetric and whose connections have no local geography; each node may carry its own bias, and the analysis is carried out by means of an auto-associative backpropagation network. Problems of resource optimization and data profiling are shown to be particularly well suited to this approach.
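To make the relaxation that such a network performs concrete, the sketch below shows one plausible reading of the dynamics described above: a single layer of nodes, a symmetric weight matrix with a zero diagonal, a per-node bias, and a set of clamped nodes acting as the constraints. This is a minimal illustration, not Buscema's implementation; the function name cs_relax, the logistic update rule, and all parameter values are assumptions made for the example.

```python
import numpy as np

def cs_relax(W, bias, clamped, n_iters=300, rate=0.1, seed=0):
    """Relax a single-layer network with symmetric weights to a stable state.

    W       -- (n, n) symmetric weight matrix with a zero diagonal
    bias    -- (n,) per-node bias terms
    clamped -- dict mapping node index -> fixed activation (the constraints)
    """
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.4, 0.6, size=W.shape[0])        # start near the neutral point
    for i, v in clamped.items():
        a[i] = v
    for _ in range(n_iters):
        net = W @ a + bias                            # net input from all other nodes
        a += rate * (1.0 / (1.0 + np.exp(-net)) - a)  # drift toward the logistic target
        for i, v in clamped.items():                  # re-impose the clamped constraints
            a[i] = v
    return a

# Two mutually exclusive hypotheses (nodes 0 and 1, joined by a negative
# weight), each supported by its own cue node (2 and 3 respectively).
W = np.array([[ 0.0, -2.0, 1.5, 0.0],
              [-2.0,  0.0, 0.0, 1.5],
              [ 1.5,  0.0, 0.0, 0.0],
              [ 0.0,  1.5, 0.0, 0.0]])
bias = np.zeros(4)
print(cs_relax(W, bias, clamped={2: 1.0}))  # clamping cue 2 favors hypothesis 0
```

In the usage lines, the negative weight between nodes 0 and 1 encodes a mutual-exclusion constraint, while clamping the cue node fixes an external condition; the free nodes then settle into activations consistent with the weights and biases, driving the network toward the hypothesis compatible with the clamped evidence.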


Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Semeion Research Center of Sciences of Communication, Rome, Italy
