
Efficient and Effective Feature Selection in the Presence of Feature Interaction and Noise

  • D. Partridge
  • W. Wang
  • P. Jones
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2013)

Abstract

This paper addresses the problem of feature subset selection for classification tasks. In particular, it focuses on the initial stages of complex real-world classification tasks, when feature interaction is expected but ill-understood, and when noise contaminating the actual feature vectors must be expected to further complicate the classification problem. A neural-network-based feature-ranking technique, the 'clamping' technique, is proposed as a robust and effective basis for feature selection that is more efficient than the established comparable techniques of sequential floating searches. The efficiency gain is that of an O(n) algorithm over the O(n²) floating-search techniques. These claims are supported by an empirical study of a complex classification task.
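The source of the efficiency claim is worth spelling out: a clamping-style ranking trains the network once and then re-evaluates it n times, once per feature, with that feature held (clamped) at a fixed value; features whose clamping most degrades accuracy rank as most salient. Sequential floating searches instead score candidate subsets while features are repeatedly added and removed, which costs on the order of n² subset evaluations. The sketch below illustrates the clamping idea only; the function name clamp_rank and the choice of the per-feature mean as the clamping value are illustrative assumptions, not the authors' exact procedure.

    import numpy as np

    def clamp_rank(predict, X, y):
        # predict: a trained classifier mapping an (m, n) array to m labels
        # X, y:    validation inputs and their true labels
        baseline = np.mean(predict(X) == y)      # accuracy with nothing clamped
        drops = np.empty(X.shape[1])
        for j in range(X.shape[1]):              # one extra pass per feature: O(n)
            Xc = X.copy()
            Xc[:, j] = X[:, j].mean()            # clamp feature j to a constant
            drops[j] = baseline - np.mean(predict(Xc) == y)
        return np.argsort(-drops)                # largest accuracy drop ranks first

    # e.g. ranking = clamp_rank(net.predict, X_val, y_val)

A feature subset can then be chosen by retaining the top-ranked features, which is where the efficiency advantage arises: no combinatorial evaluation of candidate subsets is required.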

Keywords

Feature Selection · Classification Accuracy · Feature Subset · Feature Interaction · Feature Subset Selection



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • D. Partridge¹
  • W. Wang¹
  • P. Jones¹
  1. Department of Computer Science, University of Exeter, UK
