Encyclopedia of Machine Learning and Data Mining

2017 Edition
Editors: Claude Sammut, Geoffrey I. Webb

Topology of a Neural Network

  • Risto Miikkulainen
Reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7687-1_843

Abstract

Topology of a neural network refers to the way the neurons are connected, and it is an important factor in how the network functions and learns. A common topology in unsupervised learning is a direct mapping of inputs to a collection of units that represents categories (e.g., Self-Organizing Maps). The most common topology in supervised learning is the fully connected, three-layer, feedforward network (see Backpropagation and Radial Basis Function Networks): All input values to the network are connected to all neurons in the hidden layer (called hidden because they are not visible in either the input or the output), the outputs of the hidden neurons are connected to all neurons in the output layer, and the activations of the output neurons constitute the output of the whole network. Such networks are popular partly because they are known theoretically to be universal function approximators (with, e.g., a sigmoid or Gaussian nonlinearity in the hidden layer neurons), although networks with more layers may be easier to train in practice (e.g., Cascade-Correlation).
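
As a rough illustration of the fully connected, three-layer topology described above, the following is a minimal NumPy sketch (the layer sizes, weight initialization, and the choice of a sigmoid hidden nonlinearity are illustrative assumptions, not part of the entry):

```python
import numpy as np


def sigmoid(x):
    """Sigmoid nonlinearity applied to the hidden layer."""
    return 1.0 / (1.0 + np.exp(-x))


class ThreeLayerNet:
    """Fully connected feedforward network: inputs -> hidden -> outputs."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Every input connects to every hidden neuron ...
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        # ... and every hidden neuron connects to every output neuron.
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        # Hidden activations (sigmoid nonlinearity), then linear output layer.
        h = sigmoid(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2


# Example: a network mapping 4 inputs to 3 outputs through 8 hidden neurons.
net = ThreeLayerNet(n_in=4, n_hidden=8, n_out=3)
print(net.forward(np.ones(4)))
```

The forward pass alone fixes only the topology; how the weights are set (e.g., by backpropagation) is a separate question from how the neurons are connected.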


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Department of Computer Science, The University of Texas at Austin, Austin, TX, USA