
Content-Based Driving Scene Retrieval Using Driving Behavior and Environmental Driving Signals

Chapter in: Smart Mobile In-Vehicle Systems

Abstract

With the increasing presence of drive recorders and advances in their technology, a wide variety of driving data, including video images and sensor signals such as vehicle velocity and acceleration, can be continuously recorded and stored. Although these advances may contribute to traffic safety, the growing volume of driving data makes it increasingly difficult to retrieve desired information from large databases. One of our previous research projects focused on a browsing and retrieval system for driving scenes based on driving behavior signals. To further its development, in this chapter we propose two driving scene retrieval systems. The first system, like its predecessor, measures similarities between driving behavior signals. Experimental results show that retrieval accuracy of more than 95% is achieved for driving scenes involving stops, starts, and right and left turns, although accuracy is lower for right and left lane changes and for driving up and down hills. The second system measures similarities between environmental driving signals, focusing on surrounding vehicles and road configuration. Retrieval performance is rated on a subjective scale from 1 to 5, where 1 means the retrieved scene is completely dissimilar from the query scene and 5 means they are exactly the same. In a driving scene retrieval experiment, an average score of more than 3.21 is achieved for query scenes categorized as straight, curve, lane change, and traffic jam when both road configuration and surroundings data are employed.
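The abstract describes retrieval by measuring similarity between driving-behavior signal sequences (e.g., per-frame velocity and acceleration), but does not specify the distance measure here. A common choice for comparing signal sequences of unequal length is dynamic time warping (DTW); the sketch below is illustrative only — the function names, the feature layout, and the use of DTW itself are assumptions, not the chapter's published method.

```python
import numpy as np

def dtw_distance(query, candidate):
    """Dynamic time warping distance between two multivariate signal
    sequences, each an array of shape (frames, channels), e.g. one
    velocity and one acceleration channel per frame."""
    n, m = len(query), len(candidate)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames being aligned
            d = np.linalg.norm(query[i - 1] - candidate[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def retrieve(query, database, top_k=3):
    """Rank stored scenes by similarity to the query scene and return
    the top_k closest matches (smaller DTW distance = more similar)."""
    ranked = sorted(database,
                    key=lambda scene: dtw_distance(query, scene["signals"]))
    return ranked[:top_k]
```

Under this formulation, retrieving scenes similar to a query reduces to sorting the database by DTW distance; a stop scene, for instance, would rank other stop scenes highly because their velocity profiles warp onto each other with low cost.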



Acknowledgement

This work was partially supported by the Strategic Information and Communications R & D Promotion Programme (SCOPE) of the Ministry of Internal Affairs and Communications of Japan under No. 082006002, by Grant-in-Aid for Scientific Research (C) from the Japan Society for the Promotion of Science (JSPS) under No. 24500200, and by the Core Research of Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST).

Author information

Correspondence to Yiyang Li.


Copyright information

© 2014 Springer Science+Business Media New York

About this chapter

Cite this chapter

Li, Y., Nakagawa, R., Miyajima, C., Kitaoka, N., Takeda, K. (2014). Content-Based Driving Scene Retrieval Using Driving Behavior and Environmental Driving Signals. In: Schmidt, G., Abut, H., Takeda, K., Hansen, J. (eds) Smart Mobile In-Vehicle Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9120-0_14

  • DOI: https://doi.org/10.1007/978-1-4614-9120-0_14

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-9119-4

  • Online ISBN: 978-1-4614-9120-0

  • eBook Packages: Engineering (R0)
