Automatic Motion Classification for Advanced Driver Assistance Systems

  • Alok Desai
  • Dah-Jye Lee
  • Shreeya Mody
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9475)


Many computer vision applications require motion detection and analysis. In this research, a newly developed feature descriptor is used to find sparse motion vectors, and the camera motion is detected and analyzed from the resulting sparse motion field. Statistical analysis is performed on a polar representation of the motion vectors, and the direction of motion is classified from the results of this analysis. The motion field is further used for depth analysis. The proposed method is evaluated on two video sequences subject to image deformations: illumination change, blurring, and camera movement (i.e., viewpoint change). These sequences were captured from a moving camera (a driving car) in the presence of moving objects.
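The classification step described above can be illustrated with a minimal sketch: convert each sparse motion vector to polar form, take a simple circular statistic over the angles, and quantize the dominant direction. This is an illustrative reconstruction, not the authors' exact method; the function name, the magnitude threshold, and the four-way direction classes are assumptions.

```python
import math

def classify_motion(vectors, mag_thresh=1.0):
    """Classify the dominant motion direction from sparse motion vectors.

    vectors: iterable of (dx, dy) pixel displacements between frames.
    Returns 'left', 'right', 'up', 'down', or 'static'.
    (Illustrative sketch; the threshold and classes are assumptions.)
    """
    # Polar representation: keep only vectors with significant magnitude.
    angles = []
    for dx, dy in vectors:
        if math.hypot(dx, dy) >= mag_thresh:
            angles.append(math.atan2(dy, dx))
    if not angles:
        return "static"

    # Circular mean of the angles, a simple statistic over the polar field.
    sin_sum = sum(math.sin(a) for a in angles)
    cos_sum = sum(math.cos(a) for a in angles)
    mean_deg = math.degrees(math.atan2(sin_sum, cos_sum)) % 360.0

    # Quantize into four direction classes (image y-axis points down).
    if mean_deg < 45 or mean_deg >= 315:
        return "right"
    if mean_deg < 135:
        return "down"
    if mean_deg < 225:
        return "left"
    return "up"
```

In practice the sparse vectors would come from descriptor matching between consecutive frames; the circular mean avoids the wrap-around problem that a plain average of angles near ±180° would have.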



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Brigham Young University, Provo, USA
