Theory of Constraint Satisfaction Neural Networks
Problems that call for neural network technology are typically those to which standard statistical methods cannot be applied directly, although it is not uncommon to combine neural networks with statistical techniques such as multivariate analysis to obtain solutions more accurate than traditional methods alone. This chapter shows that one particular type of neural network can be used to analyze problems governed by sets of constraints that impose conditions on admissible solutions. Both linear and nonlinear constraints can be satisfied simultaneously using the constraint satisfaction (CS) artificial neural network (ANN). The CS ANN is a single-layer model whose weight-matrix connections are symmetric and whose nodes have no association with local geography. Each node may have its own bias, and the analysis is performed using the auto-associative backpropagation network. Problems of resource optimization and data profiling are shown to be particularly well suited to this approach.
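The core mechanics described above can be illustrated with a minimal sketch: a single layer of nodes, a symmetric weight matrix encoding constraints (negative weights for mutually exclusive conditions), and per-node biases, with activations updated iteratively until the constraints are satisfied. The function name, weights, and biases below are illustrative assumptions, not the chapter's actual implementation; the update rule is a simple threshold relaxation rather than the auto-associative backpropagation procedure the chapter develops.

```python
import numpy as np

def run_cs_network(W, b, s, n_sweeps=50, seed=0):
    """Relax a constraint satisfaction network to a stable state.

    W : symmetric weight matrix; W[i, j] < 0 encodes an inhibitory
        (mutually exclusive) constraint between nodes i and j.
    b : per-node bias, the external evidence for each node.
    s : initial activation vector (values in {0, 1}).
    """
    assert np.allclose(W, W.T), "constraint weights must be symmetric"
    rng = np.random.default_rng(seed)
    n = len(s)
    for _ in range(n_sweeps):
        # asynchronous updates: visit nodes in random order
        for i in rng.permutation(n):
            net = W[i] @ s + b[i]          # net input to node i
            s[i] = 1.0 if net > 0 else 0.0 # activate if evidence wins
    return s

# Example (hypothetical): two mutually exclusive hypotheses, each
# weakly biased on; the network settles with exactly one active.
W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])
b = np.array([0.5, 0.4])
s = run_cs_network(W, b, np.array([1.0, 1.0]))
```

Because the weights are symmetric, each update can only increase the network's "goodness" (a quadratic function of the activations), so the relaxation settles into a stable state in which the encoded constraints are locally satisfied.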