A Proposal of Mouth Shapes Sequence Code for Japanese Pronunciation

  • Tsuyoshi Miyazaki
  • Toyoshiro Nakashima
  • Naohiro Ishii
Part of the Studies in Computational Intelligence book series (SCI, volume 368)

Abstract

In this paper, we examine a method in which distinctive mouth shapes are processed by a computer. When skilled lip-readers perform lip-reading, they watch the changes in the speaker's mouth shape. In recent years, research into lip-reading using information technology has been pursued, some of it based on changes in mouth shape. These studies analyze all mouth-shape data captured during an utterance, whereas skilled lip-readers focus on distinctive mouth shapes. We found that using only the distinctive mouth shapes offers a high potential for lip-reading. To build this technique into a lip-reading system, we propose a method for expressing the distinctive mouth shapes in a form that a computer can process. In doing so, we acquire knowledge about the relation between Japanese phones and mouth shapes. We also propose a method to express the order of the distinctive mouth shapes formed by a speaker.
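
The abstract does not give the paper's actual code assignments, so the following Python sketch is purely illustrative: it assumes a hypothetical mapping from the five Japanese vowels (plus a closed-lip shape) to single-letter mouth-shape codes, and shows how an utterance might be reduced to a short sequence of distinctive mouth shapes that a computer can process. The mapping, the closed-lip rule, and the function name mouth_shape_sequence are assumptions made for illustration, not the authors' method.

```python
# Hypothetical mouth-shape labels for the five Japanese vowels.
VOWEL_TO_SHAPE = {
    "a": "A",  # wide-open mouth
    "i": "I",  # spread lips
    "u": "U",  # rounded, protruded lips
    "e": "E",  # half-open mouth
    "o": "O",  # rounded, open mouth
}
CLOSED = "X"  # closed-lip shape, e.g. before /m/, /b/, /p/ (an assumption)


def mouth_shape_sequence(romaji_morae):
    """Convert a list of romaji morae into a string of mouth-shape codes."""
    codes = []
    for mora in romaji_morae:
        # Assume a closed-lip consonant contributes a distinctive closed-mouth
        # shape before the vowel shape.
        if mora[0] in ("m", "b", "p"):
            codes.append(CLOSED)
        vowel = mora[-1]
        if vowel in VOWEL_TO_SHAPE:
            codes.append(VOWEL_TO_SHAPE[vowel])
    return "".join(codes)


# Example: "konnichiwa" split into morae yields a compact mouth-shape sequence.
print(mouth_shape_sequence(["ko", "n", "ni", "chi", "wa"]))  # prints "OIIA"
```

Encoding only the distinctive shapes, rather than every video frame, mirrors the idea described above of focusing on what skilled lip-readers actually attend to.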

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Tsuyoshi Miyazaki (1)
  • Toyoshiro Nakashima (2)
  • Naohiro Ishii (3)

  1. Kanagawa Institute of Technology, Atsugi, Japan
  2. Sugiyama Jogakuen University, Nagoya, Japan
  3. Aichi Institute of Technology, Toyota, Japan
