Volume 3, Issue 4, pp 202-212

Efficient detection of spurious inputs for improving the robustness of MLP networks in practical applications


The problem of the rejection of patterns not belonging to identified training classes is investigated with respect to Multilayer Perceptron (MLP) networks. The reason for the inherent unreliability of the standard MLP in this respect is explained, and some mechanisms for the enhancement of its rejection performance are considered. Two network configurations are presented as candidates for a more reliable structure, and are compared to the so-called ‘negative training’ approach. The first configuration is an MLP which uses a Gaussian as its activation function, and the second is an MLP with direct connections from the input to the output layer of the network. The networks are examined and evaluated both through the technique of network inversion, and through practical experiments in a pattern classification application. Finally, the model of Radial Basis Function (RBF) networks is also considered in this respect, and its performance is compared to that obtained with the other networks described.
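As a rough illustration of the unreliability mentioned above, the following sketch (with made-up toy weights, not the paper's experimental setup) contrasts a sigmoidal hidden layer with a Gaussian one on an input far outside the training range: the sigmoid units saturate and still drive confident outputs, whereas the Gaussian units respond only locally and fall silent, giving a basis for rejection.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gaussian(z):
    # Bell-shaped activation: response decays as |z| grows,
    # so a unit goes quiet for inputs far from its learned region.
    return np.exp(-z ** 2)

# Hypothetical hidden-layer weights: 2 inputs -> 4 hidden units.
W = np.array([[ 0.8, -0.5,  1.2,  0.3],
              [-0.4,  0.9,  0.1, -0.7]])

x_near = np.array([0.5, -0.3])     # input near the training range
x_far  = np.array([50.0, -40.0])   # spurious input far outside it

# Sigmoid units saturate at 0 or 1 for the distant input, so the
# network can still emit a confident (and meaningless) classification.
h_sig_far = sigmoid(x_far @ W)

# Gaussian units respond only locally: activations collapse toward
# zero for the same distant input, while remaining well away from
# zero for the in-range input.
h_gauss_near = gaussian(x_near @ W)
h_gauss_far  = gaussian(x_far @ W)

print(h_sig_far)      # all saturated near 0 or 1
print(h_gauss_far)    # all essentially zero
print(h_gauss_near)   # clearly non-zero
```

A thresholding rule on the hidden (or output) activations can then flag the spurious input, which the saturated sigmoid responses cannot support.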