The polynomial neural network, also called the polynomial network classifier (PNC), is a powerful nonlinear classifier that can separate classes with complicated distributions. A method that expands polynomial terms on the principal subspace has yielded superior performance. In this paper, we aim to further improve the performance of the subspace-feature-based PNC. In the framework of discriminative feature extraction (DFE), we adjust the subspace parameters together with the network weights in supervised learning. Under a minimum squared error objective, the parameters can be updated efficiently by stochastic gradient descent. In experiments on 13 datasets from the UCI Machine Learning Repository, we show that DFE can either improve the classification accuracy or reduce the network complexity. On seven datasets, the accuracy of the PNC is competitive with support vector classifiers.


Keywords: Linear Discriminant Analysis · Network Weight · Stochastic Gradient Descent · Polynomial Term · Fisher Linear Discriminant Analysis


References

1. Holmström, L., Koistinen, P., Laaksonen, J., Oja, E.: Neural and statistical classifiers—taxonomy and two case studies. IEEE Trans. Neural Networks 8(1), 5–17 (1997)
2. Schürmann, J.: Pattern Classification: A Unified View of Statistical and Neural Approaches. Wiley Interscience, Chichester (1996)
3. Fukunaga, K.: Introduction to Statistical Pattern Recognition, 2nd edn. Academic Press, London (1990)
4. Shin, Y., Ghosh, J.: Ridge polynomial networks. IEEE Trans. Neural Networks 6(3), 610–622 (1995)
5. Kreßel, U., Schürmann, J.: Pattern classification techniques based on function approximation. In: Bunke, H., Wang, P.S.P. (eds.) Handbook of Character Recognition and Document Image Analysis, pp. 49–78. World Scientific, Singapore (1997)
6. Liu, C.-L., Nakashima, K., Sako, H., Fujisawa, H.: Handwritten digit recognition: benchmarking of state-of-the-art techniques. Pattern Recognition 36(10), 2271–2285 (2003)
7. Shin, Y., Ghosh, J.: The Pi-sigma network: an efficient higher-order neural network for pattern classification and function approximation. In: Proc. IJCNN 1991, Seattle, vol. 1, pp. 13–18 (1991)
8. Toh, K.-A., Tran, Q.-L., Srinivasan, D.: Benchmarking a reduced multivariate polynomial pattern classifier. IEEE Trans. Pattern Anal. Mach. Intell. 26(6), 740–755 (2004)
9. Brunzell, H., Eriksson, J.: Feature reduction for classification of multidimensional data. Pattern Recognition 33(10), 1741–1748 (2000)
10. Loog, M., Duin, R.P.W.: Linear dimensionality reduction via a heteroscedastic extension of LDA: the Chernoff criterion. IEEE Trans. Pattern Anal. Mach. Intell. 26(6), 732–739 (2004)
11. Biem, A., Katagiri, S., Juang, B.-H.: Pattern recognition using discriminative feature extraction. IEEE Trans. Signal Processing 45(2), 500–504 (1997)
12. Juang, B.-H., Katagiri, S.: Discriminative learning for minimum error classification. IEEE Trans. Signal Processing 40(12), 3043–3054 (1992)
13. Wang, X., Paliwal, K.K.: Feature extraction and dimensionality reduction algorithms and their applications in vowel recognition. Pattern Recognition 36(10), 2429–2439 (2003)
14. Yang, X., Pang, G., Yung, N.: Discriminative training approaches to fabric defect classification based on wavelet transform. Pattern Recognition 37(5), 889–899 (2004)
15. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)
16. UCI Machine Learning Repository,
17. Burges, C.J.C.: A tutorial on support vector machines for pattern recognition. Knowledge Discovery and Data Mining 2(2), 1–43 (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Cheng-Lin Liu
    National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China