Abstract
Because the overall driving environment is a complex combination of the traffic Environment, the Vehicle, and the Driver (EVD), Advanced Driver Assistance Systems (ADAS) must consider not only events from each EVD component but also the interactions between them. Previous work on fusing EVD states estimated and fused only simple states for single-function systems, such as lane-change intent analysis. To overcome these limitations, this paper first defines the EVD states as the driver's gazing region, the time to lane crossing, and the time to collision; these states are estimated by enhanced detection and tracking methods applied to in- and out-of-vehicle vision systems. Second, it proposes a long-term prediction method for the EVD states that uses a time-delay neural network to fuse the states and a fuzzy inference system to assess the driving situation. When tested on real driving data, the system reduced false environment assessments and produced accurate lane-departure, vehicle-collision, and visual-inattention warning signals.
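The fusion step described above combines time-critical EVD states through fuzzy inference. As a minimal sketch of how a Mamdani-style system could map two such states, time to collision (TTC) and time to lane crossing (TLC), to a warning level, consider the following. The membership breakpoints, rule base, and output values are illustrative assumptions, not the paper's actual fuzzy system:

```python
# Minimal Mamdani-style fuzzy inference sketch. All membership
# breakpoints and rules below are hypothetical, chosen only to
# illustrate the max-min inference / defuzzification pattern.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(ttc, tlc):
    """Fuse time-to-collision and time-to-lane-crossing (seconds)
    into a warning level in [0, 1] using min for rule AND and a
    weighted average of rule outputs for defuzzification."""
    # Antecedent memberships (illustrative breakpoints).
    ttc_short = tri(ttc, -1.0, 0.0, 3.0)
    ttc_long  = tri(ttc,  2.0, 6.0, 10.0)
    tlc_short = tri(tlc, -1.0, 0.0, 2.0)
    tlc_long  = tri(tlc,  1.5, 4.0, 8.0)

    # Rule base: (firing strength, representative warning level).
    rules = [
        (min(ttc_short, tlc_short), 1.0),  # both imminent -> high warning
        (min(ttc_short, tlc_long),  0.7),  # collision risk dominates
        (min(ttc_long,  tlc_short), 0.6),  # lane departure dominates
        (min(ttc_long,  tlc_long),  0.1),  # both safe -> low warning
    ]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

# Closer threats should yield a higher warning level.
print(assess(1.0, 0.5) > assess(8.0, 5.0))  # prints: True
```

A production system would use far richer state inputs (including the gazing region) and tuned membership functions; the point here is only the inference structure.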
Kim, S.Y., Choi, H.C., Won, W.J. et al. Driving environment assessment using fusion of in- and out-of-vehicle vision systems. Int.J Automot. Technol. 10, 103–113 (2009). https://doi.org/10.1007/s12239-009-0013-5