
A Priori and A Posteriori Machine Learning and Nonlinear Artificial Neural Networks

  • Jan Zelinka
  • Jan Romportl
  • Luděk Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6231)

Abstract

The main idea of a priori machine learning is to apply a machine learning method to a machine learning problem itself. We call it “a priori” because the processed data set does not originate from any measurement or other observation; machine learning that deals with observations is called “a posteriori”. The paper describes how a posteriori machine learning can be modified by a priori machine learning. A priori and a posteriori machine learning algorithms are proposed for artificial neural network training and are tested on the task of audio-visual phoneme classification.
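The abstract's two-phase idea can be illustrated with a minimal sketch (not the authors' algorithm, whose details are in the full paper): a small network is first trained "a priori" on data generated from an assumed problem structure rather than from measurement, and then trained "a posteriori" on observed data. All names, the network architecture, and the sine target family below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network trained by full-batch gradient descent on MSE.
def init(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)), "b1": np.zeros(n_hid),
            "W2": rng.normal(0, 0.5, (n_hid, n_out)), "b2": np.zeros(n_out)}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h, h @ p["W2"] + p["b2"]

def train(p, x, y, lr=0.05, epochs=500):
    for _ in range(epochs):
        h, out = forward(p, x)
        err = out - y                          # gradient of 0.5*MSE w.r.t. out
        p["W2"] -= lr * h.T @ err / len(x)
        p["b2"] -= lr * err.mean(0)
        dh = (err @ p["W2"].T) * (1 - h**2)    # backprop through tanh
        p["W1"] -= lr * x.T @ dh / len(x)
        p["b1"] -= lr * dh.mean(0)
    return p

# "A priori" phase: the training set is generated from an assumed description
# of the learning problem itself -- no measurement or observation is involved.
x_ap = rng.uniform(-1, 1, (200, 1))
y_ap = np.sin(np.pi * x_ap)                    # assumed target family
p = train(init(1, 8, 1), x_ap, y_ap)

# "A posteriori" phase: training continues on observed (noisy) data.
x_obs = rng.uniform(-1, 1, (50, 1))
y_obs = np.sin(np.pi * x_obs) + rng.normal(0, 0.05, (50, 1))
p = train(p, x_obs, y_obs, epochs=200)

_, pred = forward(p, x_obs)
mse = float(np.mean((pred - y_obs) ** 2))
print(mse)
```

The a priori phase here plays the role of an informed initialization: the a posteriori phase starts from weights already shaped by the assumed problem structure rather than from random values.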

Keywords

Speech Recognition · Criterial Function · Fusion Method · Automatic Speech Recognition · Neural Network Training
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Jan Zelinka¹
  • Jan Romportl¹
  • Luděk Müller¹

  1. Department of Cybernetics, University of West Bohemia, and SpeechTech s.r.o., Czech Republic
