
Principal Feature Networks for Pattern Recognition

  • Qi (Peter) Li
Chapter
Part of the Signals and Communication Technology book series (SCT)

Abstract

Pattern recognition is one of the fundamental technologies in speaker authentication, and understanding its concepts is important for developing speaker authentication algorithms and applications. Many books and tutorial papers already cover pattern recognition and neural networks. Rather than repeat an introduction to these fundamental techniques, we present a different approach to neural network training and construction, developed by the author and Tufts and named the principal feature network (PFN), an analytical method for constructing a classifier or recognizer. Through this chapter, readers will gain a better understanding of pattern recognition methods and neural networks, and of their relation to multivariate statistical analysis.
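The abstract describes the PFN only at a high level. The following is a minimal, illustrative sketch, not the authors' published algorithm, of how a hidden node might be constructed analytically, assuming (as the keywords weight vector, linear discriminant analysis, and hidden node suggest, and as the cited Li and Tufts papers on principal feature classification indicate) that each hidden node's weight vector is obtained from a Fisher linear discriminant direction rather than by iterative backpropagation. The function names, the regularization term, and the midpoint bias rule are assumptions made for this example.

```python
# Hypothetical sketch: build one "principal feature" hidden node analytically
# from a Fisher linear discriminant direction. This illustrates the general
# idea only; it is not the PFN construction procedure described in the chapter.
import numpy as np

def fisher_direction(X0, X1, reg=1e-6):
    """Fisher LDA direction separating two classes given as (n, d) sample arrays."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, lightly regularized for numerical stability.
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sw += reg * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def hidden_node(X0, X1):
    """Return (w, b): a hidden-node weight vector and a bias that places the
    decision threshold midway between the projected class means (an assumed rule)."""
    w = fisher_direction(X0, X1)
    b = -0.5 * (X0.mean(axis=0) + X1.mean(axis=0)) @ w
    return w, b

# Usage on synthetic 2-D data: the node output is sigmoid(X @ w + b).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X1 = rng.normal(loc=[3.0, 1.0], scale=1.0, size=(200, 2))
w, b = hidden_node(X0, X1)
scores = 1.0 / (1.0 + np.exp(-(np.vstack([X0, X1]) @ w + b)))
```

In this reading, further hidden nodes would be added for the training vectors that remain misclassified, but the specific growth and stopping rules are part of the chapter's method and are not reproduced here.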

Keywords

Weight vector · Linear discriminant analysis · Hidden node · Radial basis function network · Training vector


References

  1. Bishop, C.: Neural Networks for Pattern Recognition. Oxford University Press, New York (1995)
  2. Breiman, L., Friedman, J. H., Olshen, R. A., Stone, C. J.: Classification and Regression Trees. Wadsworth International Group, Belmont (1984)
  3. Chen, S., Cowan, C. F. N., Grant, P. M.: "Orthogonal least squares learning algorithm for radial basis function networks." IEEE Transactions on Neural Networks, vol. 2, March 1991
  4. Demuth, H., Beale, M.: Neural Network Toolbox User's Guide. The MathWorks Inc., Natick (1994)
  5. Duda, R. O., Hart, P. E.: Pattern Classification and Scene Analysis. Wiley, New York (1973)
  6. Duda, R. O., Hart, P. E., Stork, D. G.: Pattern Classification, Second Edition. Wiley, New York (2001)
  7. Duhaime, R. J.: The Use of Color Infrared Digital Orthophotography to Map Vegetation on Block Island, Rhode Island. Master's thesis, University of Rhode Island, Kingston, RI, May 1994
  8. Fisher, R. A.: "The statistical utilization of multiple measurements." Annals of Eugenics 8, 376–386 (1938)
  9. Gallant, S. I.: Neural Network Learning and Expert Systems. The MIT Press, Cambridge (1993)
  10. Johnson, R. A., Wichern, D. W.: Applied Multivariate Statistical Analysis. Prentice Hall, New Jersey (1988)
  11. Li, Q.: Classification Using Principal Features with Application to Speaker Verification. PhD thesis, University of Rhode Island, Kingston, RI, October 1995
  12. Li, Q., Tufts, D. W.: "Improving discriminant neural network (DNN) design by the use of principal component analysis," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (Detroit, MI), pp. 3375–3379, May 1995
  13. Li, Q., Tufts, D. W.: "Principal feature classification." IEEE Transactions on Neural Networks 8, 155–160 (1997)
  14. Li, Q., Tufts, D. W.: "Synthesizing neural networks by sequential addition of hidden nodes," in Proceedings of the IEEE International Conference on Neural Networks (Orlando, FL), pp. 708–713, June 1994
  15. Li, Q., Tufts, D. W., Duhaime, R., August, P.: "Fast training algorithms for large data sets with application to classification of multispectral images," in Proceedings of the IEEE 28th Asilomar Conference (Pacific Grove, CA), October 1994
  16. Reed, R.: "Pruning algorithms—a survey." IEEE Transactions on Neural Networks 4, 740–747 (1993)
  17. Sankar, A., Mammone, R. J.: "Growing and pruning neural tree networks." IEEE Transactions on Computers C-42, 291–299 (1993)
  18. Scharf, L. L.: Statistical Signal Processing. Addison-Wesley, Reading, MA (1990)
  19. Streit, R. L., Luginbuhl, T. E.: "Maximum likelihood training of probabilistic neural networks." IEEE Transactions on Neural Networks 5, September 1994
  20. Tufts, D. W., Li, Q.: "Principal feature classification," in Neural Networks for Signal Processing V, Proceedings of the 1995 IEEE Workshop (Cambridge, MA), August 1995
  21. Werbos, P. J.: The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting. Wiley, New York (1994)
  22. Zurada, J. M.: Introduction to Artificial Neural Systems. West Publishing Company, New York (1992)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Li Creative Technologies (LcT), Inc., Florham Park, USA
