
Verification of Identification Accuracy of Eye-Gaze Data on Driving Video

  • Naoto Mukai
  • Kazuhiro Fujikake
  • Takahiro Tanaka
  • Hitoshi Kanamori
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 98)

Abstract

Lack of safety confirmation is said to be the leading cause of traffic accidents. Visual information from both eyes is one of the important factors for safe driving. In this paper, we collect eye-gaze data from drivers watching a driving video and develop a model of their eye movements in order to identify factors that enhance their safety. For modeling, we applied a recurrent neural network with Long Short-Term Memory (LSTM) to the collected eye-gaze data, because LSTM can handle time-series data such as eye-gaze sequences. Moreover, we performed an experiment to evaluate driver identification accuracy. The results indicated that a driver's intention and habits can be partially approximated by the trained network, but the accuracy was insufficient to identify an individual driver for practical use.
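The modeling approach described above, an LSTM consuming a time series of gaze points and producing a driver identity, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the hidden size, sequence length, number of drivers, and random weights are all hypothetical, and a real system would be trained on the collected eye-gaze data rather than using random parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(seq, W, U, b, hidden):
    """Run one LSTM layer over seq (T x 2 gaze points); return the last hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = W @ x + U @ h + b                # all four gates computed at once
        i = sigmoid(z[0*hidden:1*hidden])    # input gate
        f = sigmoid(z[1*hidden:2*hidden])    # forget gate
        o = sigmoid(z[2*hidden:3*hidden])    # output gate
        g = np.tanh(z[3*hidden:4*hidden])    # candidate cell state
        c = f * c + i * g                    # update cell state
        h = o * np.tanh(c)                   # update hidden state
    return h

# Hypothetical dimensions: 8 hidden units, 4 candidate drivers.
hidden, n_drivers = 8, 4
W = rng.normal(scale=0.1, size=(4 * hidden, 2))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(n_drivers, hidden))

# 50 time steps of normalized (x, y) gaze coordinates on the driving video.
gaze = rng.uniform(0.0, 1.0, size=(50, 2))
h_last = lstm_forward(gaze, W, U, b, hidden)
logits = W_out @ h_last
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over driver identities
print("predicted driver:", probs.argmax())
```

The final hidden state summarizes the whole gaze sequence, so one softmax layer on top suffices for identification; in practice the weights would be learned with backpropagation through time.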

Notes

Acknowledgment

This work is supported by the Research Project of Agent Mediated Driving Support at Nagoya University. We are truly thankful to the members of the group.

References

  1. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
  2. Graves, A., Fernández, S., Schmidhuber, J.: Bidirectional LSTM networks for improved phoneme classification and recognition. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) Artificial Neural Networks: Formal Models and Their Applications - ICANN 2005, pp. 799–804. Springer, Heidelberg (2005)
  3. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems, vol. 27, pp. 3104–3112. Curran Associates, Inc. (2014)
  4. Mima, H., Ikeda, K., Shibata, T., Fukaya, N., Hitomi, K., Bando, T.: Estimation of driving state by modeling brake pressure signals. IEICE Tech. Rep. NLP 109(124), 49–53 (2009). (in Japanese)
  5. Okada, S., Hitomi, K., Chandrasiri, N.P., Rho, Y., Nitta, K.: Analysis of driving behavior based on time-series data mining of vehicle sensor data. Proc. Forum Inf. Technol. 11(4), 387–390 (2012). (in Japanese)
  6. Horiguchi, Y., Suzuki, T., Suzuki, T., Sawaragi, T., Nakanishi, H., Takimoto, T.: Analysis of train driver’s visual perceptual skills using Markov cluster algorithm. J. Jpn. Soc. Fuzzy Theor. Intell. Inform. 28(3), 598–607 (2016). (in Japanese)
  7. Tanaka, T., Fuzikake, K., Yonekawa, T., Yamagishi, M., Inagami, M., Kinoshita, F., Aoki, H., Kanamori, H.: Analysis of relationship between forms of driving support agent and gaze behavior-study on driver agent for encouraging safety driving behavior of elderly drivers. In: Proceedings of Human-Agent Interaction Symposium (HAI), p. 2 (2017). (in Japanese)
  8. Kamisaka, T., Noda, M., Mekada, Y., Deguchi, D., Ide, I., Murase, H.: Prediction of driving behavior using driver’s gaze information. IEICE Tech. Rep. Med. Imaging 111(49), 105–110 (2011). (in Japanese)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  • Naoto Mukai (1)
  • Kazuhiro Fujikake (2)
  • Takahiro Tanaka (2)
  • Hitoshi Kanamori (2)
  1. Department of Culture-Information Studies, Sugiyama Jogakuen University, Nagoya, Japan
  2. Institutes of Innovation for Future Society, Nagoya University, Nagoya, Japan
