High-order Neural Networks and Networks with Composite Key Patterns

  • N. B. Karayiannis
  • A. N. Venetsanopoulos
Chapter
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 209)

Abstract

The development of neural networks of order higher than one was approached independently by a number of research groups. Rumelhart et al. examined high-order neural networks by introducing new types of units, different from the conventional ones, known as sigma-pi units (Rumelhart and McClelland, 1986). Motivated by the relationship between first-order neural networks and linear discriminant functions, other groups of researchers emphasized the connection between neural networks of order higher than one and nonlinear discriminant functions (Chen et al., 1986; Psaltis and Park, 1986; Psaltis et al., 1988). Giles et al. used high-order recurrent neural networks to infer regular grammars from positive and negative strings of training samples (Giles et al., 1990; Giles et al., 1991). This chapter focuses on the architecture, training, and properties of neural networks of order higher than one (Karayiannis, 1991a; Karayiannis and Venetsanopoulos, 1992e). It is shown that high-order neural networks of any order can be trained by any of the learning algorithms developed for first-order neural networks. This chapter also evaluates the efficiency of the outer-product rule when applied to the training of neural networks of order higher than one. The investigation of the architecture of high-order neural networks leads to the development of neural networks with composite key patterns, which constitute an essential generalization of high-order neural networks. The training, performance, and potential applications of neural networks with composite key patterns are also investigated (Karayiannis, 1991a; Karayiannis and Venetsanopoulos, 1992e).
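The following sketch is not taken from the chapter; it is a minimal illustration of the general idea that a high-order unit can be trained with a first-order learning rule, here the ordinary delta (perceptron) rule applied to an input vector expanded with pairwise products. The function names (second_order_expansion, train_delta_rule) and the choice of the XOR problem are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def second_order_expansion(x):
    # Second-order feature map: the original components x_i plus all
    # pairwise products x_i * x_j (i <= j), i.e. what a second-order
    # (sigma-pi style) unit effectively sees as its input.
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, pairs])

def train_delta_rule(patterns, targets, lr=0.1, epochs=100):
    # Train a single second-order threshold unit with the plain
    # first-order delta rule, applied to the expanded feature vectors.
    dim = len(second_order_expansion(patterns[0]))
    w, b = np.zeros(dim), 0.0
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            z = second_order_expansion(x)
            y = 1.0 if w @ z + b > 0 else 0.0
            w += lr * (t - y) * z
            b += lr * (t - y)
    return w, b

# XOR is not linearly separable for a first-order unit, but it becomes
# separable in the second-order feature space (the product term x1*x2
# supplies the needed nonlinearity).
X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0.0, 1.0, 1.0, 0.0]
w, b = train_delta_rule(X, T)
print([1 if w @ second_order_expansion(x) + b > 0 else 0 for x in X])
```

The point of the example is only that the high-order structure lives entirely in the fixed input expansion, so any weight-update rule developed for first-order networks can be reused unchanged.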

Keywords

Neural Network, Artificial Neural Network, Input Pattern, Synaptic Weight, Hidden Unit

Copyright information

© Springer Science+Business Media New York 1993

Authors and Affiliations

  • N. B. Karayiannis (1)
  • A. N. Venetsanopoulos (2)
  1. University of Houston, USA
  2. University of Toronto, Canada