
Driving environment assessment using fusion of in- and out-of-vehicle vision systems

  • Published in: International Journal of Automotive Technology

Abstract

Because the overall driving environment is a complex combination of the traffic Environment, the Vehicle, and the Driver (EVD), Advanced Driver Assistance Systems (ADAS) must consider not only events from each EVD component but also the interactions between them. Previous researchers focused on fusing EVD states, but they estimated and fused only simple states for single-function systems such as lane change intent analysis. To overcome these limitations, this paper first defines the EVD states as the driver's gaze region, the time to lane crossing, and the time to collision; these states are estimated with enhanced detection and tracking methods using in- and out-of-vehicle vision systems. Second, it proposes a long-term prediction method for the EVD states that uses a time-delayed neural network to fuse the states and a fuzzy inference system to assess the driving situation. When tested on real driving data, our system reduced false environment assessments and provided accurate lane departure, vehicle collision, and visual inattention warnings.
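The fusion step described above can be sketched in miniature. The following is a hedged illustration, not the paper's implementation: the state names, membership-function shapes, and thresholds are assumptions, and the TDNN prediction stage is omitted. It shows how three EVD states — time to lane crossing (TTLC), time to collision (TTC), and a gaze-derived attention score — could be combined with a small Mamdani-style fuzzy rule base into a single danger score.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(ttlc, ttc, attention):
    """Return a danger score in [0, 1] from three EVD states.

    ttlc and ttc are in seconds; attention is 1.0 when the driver's gaze
    region is on the road ahead and 0.0 when fully off-road (an assumed
    encoding of the gaze state).
    """
    # Fuzzify: how strongly is each time-to-event "short"?
    ttlc_short = tri(ttlc, -1.0, 0.0, 2.0)   # "short" if under ~2 s
    ttc_short = tri(ttc, -1.0, 0.0, 3.0)     # "short" if under ~3 s
    inattentive = 1.0 - attention

    # Mamdani-style rules (min = AND, max = OR):
    #   danger if (TTLC short AND inattentive) OR (TTC short AND inattentive)
    r1 = min(ttlc_short, inattentive)
    r2 = min(ttc_short, inattentive)
    return max(r1, r2)

# Attentive driver with ample margins: low danger.
print(assess(ttlc=3.0, ttc=5.0, attention=1.0))   # 0.0
# Inattentive driver about to cross the lane boundary: high danger.
print(assess(ttlc=0.5, ttc=5.0, attention=0.1))   # 0.75
```

The interaction effect the abstract emphasizes shows up in the rule structure: a short TTLC alone does not trigger a high score unless the gaze state also indicates inattention, which is how cross-component fusion can suppress false warnings.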



Author information


Corresponding author

Correspondence to S. Y. Kim.


About this article

Cite this article

Kim, S.Y., Choi, H.C., Won, W.J. et al. Driving environment assessment using fusion of in- and out-of-vehicle vision systems. Int. J. Automot. Technol. 10, 103–113 (2009). https://doi.org/10.1007/s12239-009-0013-5

