Part of the book series: Advances in Computer Vision and Pattern Recognition ((ACVPR))

Abstract

At first sight, the handling of probability values in real computing systems seems a trivial problem, as the range of these quantities is limited to the interval [0.0…1.0]. Nevertheless, problems arise, especially in longer computational procedures, as extremely small values lying close to zero need to be represented and manipulated. It is therefore of fundamental importance for the practical use of Markov models to be able to effectively counteract this phenomenon of de facto vanishing probabilities. The most important mechanism for doing so is an improved numerical representation of these quantities. In addition, probabilities may, if necessary, be bounded from below algorithmically.
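To make the underflow problem concrete, the following sketch (with made-up per-frame probabilities; the values are purely illustrative) multiplies a few hundred probabilities in the linear domain and compares the result with summing their logarithms:

```python
import math

# Illustrative only: 200 made-up per-frame probabilities of 0.001 each.
# The true product is 1e-600, far below the smallest positive double
# (about 4.9e-324), so the linear computation underflows to exactly 0.0.
probs = [1e-3] * 200

linear = 1.0
for p in probs:
    linear *= p          # silently underflows to 0.0 partway through

# In the log domain the same product is a harmless sum of moderate numbers.
log_score = sum(math.log(p) for p in probs)   # 200 * ln(1e-3)

print(linear)      # 0.0
print(log_score)   # ≈ -1381.55
```

The underflow is silent: no exception is raised, and every subsequent score computed from the zeroed value is equally meaningless, which is why log-domain representations are the standard remedy.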


Notes

  1. On virtually all modern computer architectures, floating-point numbers are represented in formats standardized in the framework of the ANSI/IEEE standard 854 [130]. Considering absolute values only, single-precision numbers can represent magnitudes in the range of approximately \(1.4\cdot 10^{-45}\) to \(3.4\cdot 10^{38}\), and double-precision numbers in the range of approximately \(4.9\cdot 10^{-324}\) to \(1.8\cdot 10^{308}\).
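In Python, whose floats are IEEE double-precision numbers, these limits can be inspected directly (a quick sanity check, not part of the original text):

```python
import sys

# Largest finite double and smallest *normal* positive double:
print(sys.float_info.max)   # ≈ 1.7976931348623157e+308
print(sys.float_info.min)   # ≈ 2.2250738585072014e-308

# Subnormal numbers extend the range further down, to about 4.9e-324:
tiny = 5e-324               # smallest positive subnormal double
print(tiny > 0.0)           # True
print(tiny / 2)             # 0.0 -- gradual underflow bottoms out here
```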

  2. Feature vectors are extracted from the speech signal with a spacing of 10 ms in virtually all current speech recognition systems.

  3. In log-domain representations this will correspond to picking the smaller of \(\tilde{p}_{1}\) and \(\tilde{p}_{2}\).
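Assuming the negative-log representation \(\tilde{p} = -\ln p\) suggested by this note (larger probabilities map to smaller codes), maximization over probabilities reduces to minimization over their codes, as a quick sketch shows:

```python
import math

# Hypothetical values; in a negative-log code p_tilde = -ln(p),
# the larger probability has the smaller code.
p1, p2 = 0.4, 0.1
pt1, pt2 = -math.log(p1), -math.log(p2)

# Maximizing probabilities == minimizing their negative-log codes,
# so e.g. the Viterbi maximization never needs to leave the log domain.
assert min(pt1, pt2) == -math.log(max(p1, p2))
print(min(pt1, pt2))   # ≈ 0.9163, i.e. -ln(0.4)
```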

  4. The precision of the “logarithmic summation” can be improved further by using a function that computes \(\ln(1+x)\) accurately even for small x, e.g., the function log1p() available in the standard C library.
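A minimal sketch of such a logarithmic summation, assuming probabilities stored as negative natural logarithms as in the preceding notes (the function name is my own; Python's math.log1p wraps the C-library routine mentioned above):

```python
import math

def neg_log_add(pt1: float, pt2: float) -> float:
    """Logarithmic summation: given pt1 = -ln(p1) and pt2 = -ln(p2),
    return -ln(p1 + p2) without ever leaving the log domain.
    math.log1p computes ln(1 + x) accurately even for tiny x."""
    if pt1 > pt2:                 # make pt1 the smaller code (larger prob)
        pt1, pt2 = pt2, pt1
    # -ln(p1 + p2) = pt1 - ln(1 + p2/p1), and p2/p1 = exp(pt1 - pt2) <= 1
    return pt1 - math.log1p(math.exp(pt1 - pt2))

# Two probabilities of e^-800 each: individually they underflow to 0.0
# in linear double arithmetic, yet their log-domain sum is unproblematic.
print(neg_log_add(800.0, 800.0))   # 800 - ln 2 ≈ 799.3069
```

Factoring out the larger probability keeps the argument of exp() non-positive, so the exponential can only underflow harmlessly toward zero, never overflow.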

  5. This problem does not arise in the usual decoding of semi-continuous HMMs. When replacing density values of individual mixtures by class posterior probabilities (cf. Sect. 7.3), the resulting quantities can safely be manipulated in the linear domain.

  6. Applying Eq. (7.1) in a naive way would even cause an error in the floating-point computation.

References

  1. Asadi, A., Schwartz, R., Makhoul, J.: Automatic detection of new words in a large vocabulary continuous speech recognition system. In: Proc. Int. Conf. on Acoustics, Speech, and Signal Processing, Albuquerque, pp. 125–128 (1990)

  2. Durbin, R., Eddy, S.R., Krogh, A., Mitchison, G.: Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, Cambridge (1998)

  3. Hayamizu, S., Itou, K., Tanaka, K.: Detection of unknown words in large vocabulary speech recognition. In: Proc. European Conf. on Speech Communication and Technology, Berlin, pp. 2113–2116 (1993)

  4. Huang, X.D., Ariki, Y., Jack, M.A.: Hidden Markov Models for Speech Recognition. Information Technology Series, vol. 7. Edinburgh University Press, Edinburgh (1990)

  5. IEEE Computer Society. Technical Committee on Microprocessors and Microcomputers, IEEE Standards Board: IEEE Standard for Radix-Independent Floating-Point Arithmetic. ANSI/IEEE Std 854-1987, p. 16. IEEE Computer Society Press, 1109 Spring Street, Suite 300, Silver Spring, MD 20910, USA (1987)

  6. Jusek, A., Fink, G.A., Kummert, F., Sagerer, G.: Automatically generated models for unknown words. In: Proc. Australian International Conference on Speech Science and Technology, Adelaide, pp. 301–306 (1996)

  7. Kingsbury, N.G., Rayner, P.J.W.: Digital filtering using logarithmic arithmetic. Electron. Lett. 7(2), 56–58 (1971)

  8. Krogh, A.: An introduction to Hidden Markov Models for biological sequences. In: Salzberg, S.L., Searls, D.B., Kasif, S. (eds.) Computational Methods in Molecular Biology, pp. 45–63. Elsevier, New York (1998)

  9. Lee, K.-F.: Automatic Speech Recognition: The Development of the SPHINX System. Kluwer Academic, Boston (1989)

  10. Rabiner, L.R.: A tutorial on Hidden Markov Models and selected applications in speech recognition. Proc. IEEE 77(2), 257–286 (1989)

  11. Young, S.R.: Detection of misrecognitions and out-of-vocabulary words in spontaneous speech. In: McKevitt, P. (ed.) AAAI-94 Workshop Program: Integration of Natural Language and Speech Processing, Seattle, Washington, pp. 31–36 (1994)

Copyright information

© 2014 Springer-Verlag London

Cite this chapter

Fink, G.A. (2014). Computations with Probabilities. In: Markov Models for Pattern Recognition. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-6308-4_7

  • DOI: https://doi.org/10.1007/978-1-4471-6308-4_7

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-6307-7

  • Online ISBN: 978-1-4471-6308-4
