Abstract
This work addresses the use of image analysis and computer vision in the context of Advanced Driver Assistance Systems (ADAS). Video-based systems are a powerful complement to classical active-sensor approaches: being non-intrusive, they avoid interference between sensors, allow a deeper understanding of the scene, and offer better perspectives in terms of cost and flexibility. The main challenges in this field are the large within-class variability of vehicles, the complexity of the background induced by camera movement, and the effect of illumination and weather conditions on vehicle appearance. In particular, this work considers automatic vehicle detection and tracking from an on-board forward-looking camera. As its main contribution, a unified approach based on statistical methods is proposed, so that vehicles can be identified against the background irrespective of scene conditions (e.g., weather, time of day, ego-vehicle velocity) and vehicle appearance. The approach is divided into three primary tasks: vehicle hypothesis generation, hypothesis verification, and vehicle tracking. Hypothesis generation is based on a rectified domain in which the perspective is removed from the original image. A supervised classification strategy is adopted to verify the hypothesized vehicle locations, evaluating the performance of different feature extraction methods. Finally, a Bayesian tracking framework using particle filters is proposed, in which a constant-velocity dynamic model is combined with a multi-cue observation model based on appearance analysis in the rectified and original domains. Evaluation on real sequences demonstrates the robustness of the proposed framework.
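The rectified, perspective-free domain used for hypothesis generation amounts to warping image coordinates through a ground-plane homography (inverse perspective mapping). The sketch below shows only the per-point core of that warp; the 3×3 matrices are illustrative placeholders, not calibrated values from the paper, and the paper's robust perspective estimation is not reproduced here.

```python
import numpy as np

def warp_points(H, pts):
    """Map N×2 pixel coordinates through a 3×3 homography H.

    Points are lifted to homogeneous coordinates, multiplied by H,
    and de-homogenised. Applied per pixel with a ground-plane
    homography, this removes the road perspective from the image.
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 3) homogeneous
    mapped = homog @ H.T                               # (N, 3) warped
    return mapped[:, :2] / mapped[:, 2:3]              # de-homogenise

# Identity homography: points map to themselves (no perspective change).
H = np.eye(3)
```

In practice the homography would be estimated from the detected lane markings or camera calibration; once known, the whole frame can be warped so that vehicle footprints on the road appear at a uniform scale.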
The exchange of information between processing blocks is encouraged so that the framework adapts as much as possible to changes in the environment while the computational cost is reduced, yielding high detection rates in the verification phase and fewer tracking failures.
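The particle-filter tracker with a constant-velocity dynamic model described above can be sketched as follows. This is a minimal illustration under simplified assumptions, not the authors' implementation: the state is a 2-D image position with velocity, and the multi-cue appearance observation model is replaced by a plain Gaussian likelihood around a measured position.

```python
import numpy as np

def particle_filter_cv(measurements, n_particles=500, dt=1.0,
                       process_std=1.0, obs_std=2.0, seed=0):
    """Track a 2-D position with a constant-velocity particle filter.

    State per particle: [x, y, vx, vy]. Each step: propagate with the
    constant-velocity model plus Gaussian process noise, weight by a
    Gaussian likelihood of the measured position, then resample.
    """
    rng = np.random.default_rng(seed)
    # Initialise particles around the first measurement, zero velocity.
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = measurements[0] + rng.normal(0, obs_std, (n_particles, 2))
    estimates = []
    for z in measurements:
        # Predict: constant-velocity motion plus process noise.
        particles[:, 0] += particles[:, 2] * dt
        particles[:, 1] += particles[:, 3] * dt
        particles += rng.normal(0, process_std, particles.shape)
        # Update: Gaussian likelihood of the observed position.
        d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / obs_std ** 2)
        w /= w.sum()
        # Point estimate: weighted posterior mean of the state.
        estimates.append(particles.T @ w)
        # Systematic resampling to avoid weight degeneracy.
        cs = np.cumsum(w)
        cs[-1] = 1.0  # guard against floating-point shortfall
        idx = np.searchsorted(cs, (rng.random() + np.arange(n_particles)) / n_particles)
        particles = particles[idx]
    return np.array(estimates)
```

In the full framework the Gaussian likelihood would be replaced by the multi-cue appearance model evaluated in both the original and rectified domains, and the state would include the vehicle's bounding-box extent.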
Cite this article
Arróspide, J., Salgado, L. Video based vehicle detection and tracking for driver assistance systems. Securitas Vialis 7, 41–49 (2015). https://doi.org/10.1007/s12615-014-9080-0
Keywords
- Advanced Driver Assistance Systems (ADAS)
- Vehicle detection and tracking
- Video analysis
- Principal component analysis (PCA)
- Histogram of oriented gradients (HOG)
- Particle filters