Detection of traffic signs based on eigen-color model and saliency model in driver assistance systems

  • Published in: International Journal of Automotive Technology

Abstract

Traffic signs are produced in many distinct shapes and colors so that they stand out from their surroundings and remain clearly visible to drivers. Most previous detection methods therefore rely on shape and color information. However, shape and color cues are sensitive to the surroundings, such as the viewpoint and changes in brightness or background, which makes them unreliable in terms of detection accuracy. To solve this problem, this paper presents a traffic sign detection method that is invariant to environmental changes in the appearance and illumination of traffic scenes. The method uses a saliency model to capture features of traffic signs that are invariant to shading and shadow, and an eigen-color model of traffic signs to extract color features that are invariant to illumination changes. Together, these models represent the various shapes and colors of traffic signs in a uniform way. Finally, traffic signs are detected through an object verification process in which each sign is confirmed by finding the region of overlap between two selected candidate regions. Automatic detection with the presented approach was evaluated on traffic scenes from several roads; the detection rate was 92% and the processing time was 0.27 s per frame.
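
The pipeline outlined above builds two candidate maps, a saliency map that is robust to shading and shadow and a color map tuned to traffic-sign colors, and keeps only the regions where the two agree. The Python sketch below is a minimal illustration of that idea, not the paper's implementation: it substitutes a spectral-residual saliency map for the paper's saliency model, uses a hand-picked red-emphasis projection in place of the learned eigen-color transform, and the thresholds and function names (saliency_map, eigen_color_map, detect_sign_candidates) are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage


def saliency_map(gray):
    """Spectral-residual saliency map; a simple stand-in for the paper's saliency model."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # The spectral residual is the log amplitude minus its local average.
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=3)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)


def eigen_color_map(rgb):
    """Red-emphasis color cue; the projection weights below are illustrative,
    not the eigen-color transform learned in the paper."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    cue = np.clip(r - 0.5 * (g + b), 0.0, None)  # hypothetical projection axis
    return cue / (cue.max() + 1e-8)


def detect_sign_candidates(rgb, sal_thr=0.6, col_thr=0.3, min_area=100):
    """Keep connected regions where the saliency mask and the color mask overlap,
    mimicking the overlap-based verification step described in the abstract."""
    gray = rgb.mean(axis=2)
    overlap = (saliency_map(gray) > sal_thr) & (eigen_color_map(rgb) > col_thr)
    labels, _ = ndimage.label(overlap)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, w, h)
    return boxes
```

For a frame loaded as an H x W x 3 uint8 array, detect_sign_candidates(frame) returns a list of (x, y, w, h) boxes where both cues agree; the 92% detection rate quoted in the abstract refers to the authors' full method, not to this simplified sketch.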



Author information

Corresponding author

Correspondence to J. B. Kim.


About this article

Cite this article

Kim, J. B. Detection of traffic signs based on eigen-color model and saliency model in driver assistance systems. Int. J. Automot. Technol. 14, 429–439 (2013). https://doi.org/10.1007/s12239-013-0047-6
