CMOS current-mode implementation of spatiotemporal probabilistic neural networks for speech recognition

  • Chung-Yu Wu
  • Ron-Yi Liu


In this paper, a Spatiotemporal Probabilistic Neural Network (SPNN) is proposed for spatiotemporal pattern recognition. The new model is developed by applying the concept of the Gaussian density function to the network structure of the SPR (Spatiotemporal Pattern Recognition) model. Its main advantages include faster training and faster recall of patterns. In addition, the overall architecture is simple, modular, regular, locally connected, and suitable for VLSI implementation. A speaker-independent isolated-word (Mandarin digit) speech database is used as an example to demonstrate the superiority of the neural network for spatiotemporal pattern recognition. The test result, with a reduced error rate of 7%, shows that the SPNN is attractive and effective for practical applications.

CMOS current-mode IC technology is used to implement the SPNN, achieving the objective of minimum classification error in a more direct manner. In this design, neural computation is performed in analog circuits, while template information is stored in digital circuits. A prototype speech recognition processor for 12th-order LPC calculation is designed in 1.2 μm CMOS technology. HSPICE simulation results are also presented, which verify the function of the designed neural system.
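The classification principle underlying the SPNN is that of the probabilistic neural network (Specht, reference 5 below): each class accumulates Gaussian (Parzen-window) kernel densities centered on its stored templates, and the input is assigned to the class with the largest summed density. As a rough software illustration only (not the authors' SPNN or its current-mode circuit), a minimal sketch, with the function name, `sigma` value, and template/label data being hypothetical:

```python
import numpy as np

def pnn_classify(x, templates, labels, sigma=0.5):
    """Assign x to the class whose summed Gaussian kernel density is largest.

    templates: list of stored feature vectors (one per training exemplar)
    labels:    class label of each template
    sigma:     smoothing width of the Parzen-window Gaussian kernels
    """
    x = np.asarray(x, dtype=float)
    scores = {}
    for t, c in zip(templates, labels):
        # Squared Euclidean distance from the input to this template
        d2 = np.sum((x - np.asarray(t, dtype=float)) ** 2)
        # Accumulate the Gaussian kernel contribution into the class score
        scores[c] = scores.get(c, 0.0) + np.exp(-d2 / (2.0 * sigma ** 2))
    # Winner-take-all decision over the class scores
    return max(scores, key=scores.get)

# Hypothetical two-class example in a 2-D feature space
templates = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
labels = [0, 0, 1, 1]
print(pnn_classify([0.2, 0.3], templates, labels))  # -> 0
print(pnn_classify([5.5, 5.0], templates, labels))  # -> 1
```

In the paper's hardware realization, the kernel evaluation would be carried out by analog current-mode circuits and the final argmax by a winner-take-all circuit (references 8 and 9 below), rather than in software.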


Keywords: Probability Density Function, Recognition Rate, Speech Recognition, Probabilistic Neural Network, VLSI Implementation



References

  1. H. Sakoe, R. Isotani, K. Yoshida, and T. Watanabe, "Speaker independent word recognition using dynamic programming neural networks," Proc. Int. Symp. Acoustics, Speech, and Signal Processing, pp. 29–32, May 1989.
  2. C.Y. Wu, R.Y. Liu, and Q.Z. Wu, "A probabilistic neural network for spatiotemporal pattern recognition," Proc. IEEE Int. Conf. Systems Engineering, pp. 32–35, Sept. 1992.
  3. Special issue on current-mode analog signal processing circuits, IEE Proc., Part G, April 1990.
  4. J.A. Freeman and D.M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.
  5. D.F. Specht, "Probabilistic neural networks and the polynomial adaline as complementary techniques for classification," IEEE Trans. Neural Networks, Vol. 1, pp. 111–121, 1990.
  6. E. Parzen, "On estimation of a probability density function and mode," Ann. Math. Stat., Vol. 33, pp. 1065–1076, 1962.
  7. R.P. Lippmann, "An introduction to computing with neural nets," IEEE ASSP Magazine, Vol. 4, pp. 4–22, April 1987.
  8. R. Perfetti, "Winner-take-all circuit for neurocomputing applications," Proc. Inst. Elec. Eng., Vol. 137, pp. 353–359, 1990.
  9. J.C. Yen and S. Chang, "Improved winner-take-all neural network," Electronics Letters, Vol. 28, pp. 662–664, 1992.
  10. P.W. Hollis and J.J. Paulos, "Artificial neural networks using MOS analog multipliers," IEEE Journal of Solid-State Circuits, Vol. 25, pp. 849–855, 1990.
  11. K. Bult and H. Wallinga, "A class of analog CMOS circuits based on the square-law characteristic of an MOS transistor in saturation," IEEE Journal of Solid-State Circuits, Vol. 22, pp. 357–365, 1987.
  12. T.S. Fiez, G. Liang, and D.J. Allstot, "Switched-current circuit design issues," IEEE Journal of Solid-State Circuits, Vol. 26, pp. 192–202, 1991.
  13. C. Turchetti and M. Conti, "A new design of neural networks based on approximate identities for approximation and learning," Proc. Int. Symp. Circuits and Systems, pp. 359–362, May 1992.
  14. B.J. Maundy and E.I. El-Masry, "Feedforward associative memory switched-capacitor artificial neural networks," Analog Integrated Circuits and Signal Processing, Vol. 1, pp. 321–338, 1991.
  15. B.J. Sheu, J. Choi, and C.F. Chang, "An analog neural network processor for self-organizing mapping," IEEE Int. Solid-State Circuits Conf., pp. 136–137, Feb. 1992.
  16. M.E. Robinson, H. Yoneda, and E. Sanchez-Sinencio, "A modular CMOS design of a Hamming network," IEEE Trans. Neural Networks, Vol. 3, pp. 444–456, 1992.

Copyright information

© Kluwer Academic Publishers 1995

Authors and Affiliations

  • Chung-Yu Wu (1)
  • Ron-Yi Liu (2, 3)
  1. Integrated Circuits and System Laboratory, Department of Electronics Engineering and Institute of Electronics, Engineering Building IV, National Chiao Tung University, Hsinchu, Taiwan, Republic of China
  2. Integrated Circuits and System Laboratory, Department of Electronics Engineering and Institute of Electronics, Engineering Building IV, National Chiao Tung University, Hsinchu, Taiwan, Republic of China
  3. Telecommunication Laboratories, Ministry of Communications, Chung-Li, Taiwan, Republic of China
