ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs


Localization is among the most important prerequisites for autonomous navigation. Vision-based systems have received great attention in recent years owing to the numerous advantages of cameras over other sensors. Reducing the computational burden of such systems is an active research area, as it makes them applicable to resource-constrained platforms. This paper proposes a fast monocular approach, named ARM-VO, and compares it with two state-of-the-art algorithms, LibViso2 and ORB-SLAM2, on the Raspberry Pi 3. The approach is a sequential frame-to-frame scheme that extracts a sparse set of well-distributed features and tracks them in upcoming frames using the Kanade–Lucas–Tomasi (KLT) tracker. A robust model-selection step avoids degenerate configurations of the fundamental matrix. Scale ambiguity is resolved by incorporating the known camera height above the ground. The method is open-sourced [] and implemented in ROS, mostly using NEON C intrinsics, while exploiting the multi-core architecture of the CPU. Experiments on the KITTI dataset show that ARM-VO is 4–5 times faster than the alternatives and is the only method that runs in near real time on the Raspberry Pi 3. It achieves significantly better accuracy than LibViso2 and ranks second after ORB-SLAM2.
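The robust model selection mentioned in the abstract guards against degenerate configurations of the fundamental matrix, such as planar scenes or near-pure rotation, where a homography explains the correspondences better. A common criterion for choosing between the two models is Torr's GRIC score. The sketch below is a generic pure-Python rendition of that criterion with its usual constant choices, not ARM-VO's actual implementation; the function name and interface are assumptions.

```python
import math

def gric(residuals, sigma, model):
    """Torr's GRIC score for motion-model selection; lower is better.

    residuals -- per-correspondence geometric errors (e.g. Sampson distance)
    sigma     -- assumed noise standard deviation of the measurements
    model     -- "F" (fundamental matrix) or "H" (homography)
    """
    n = len(residuals)
    r = 4                           # dimension of each datum (two 2-D points)
    d = 3 if model == "F" else 2    # dimension of the model manifold
    k = 7 if model == "F" else 8    # degrees of freedom of the model
    lam1, lam2, lam3 = math.log(r), math.log(r * n), 2.0
    # Robust residual term: each error contributes at most lam3 * (r - d)
    rho = sum(min(e * e / sigma ** 2, lam3 * (r - d)) for e in residuals)
    return rho + lam1 * d * n + lam2 * k

# The model with the lower score is selected, e.g.:
# model = "H" if gric(res_h, s, "H") < gric(res_f, s, "F") else "F"
```

With near-zero residuals for both models the homography wins (smaller structure penalty), while for general 3-D motion the fundamental matrix's lower per-point residual cap lets it win, which is exactly the degeneracy-avoiding behaviour the abstract describes.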




  1. We did not observe parallel OpenCV execution via OpenMP on the Raspberry Pi; OpenCV appears to use only TBB internally to parallelize its code, not OpenMP.

  2. Stereo mode must be used because, apart from LibViso2, none of the other mentioned algorithms can recover scale in monocular mode.
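Footnote 2 points at the monocular scale ambiguity; ARM-VO resolves it using the known camera height above the ground. The sketch below illustrates the idea only: the function name, the down-pointing y-axis convention, and the use of a median are assumptions, not ARM-VO's actual code.

```python
def absolute_scale(ground_points, camera_height):
    """Recover metric scale from the known camera height above the ground.

    ground_points -- up-to-scale 3-D points (x, y, z) triangulated on the
                     road surface, in a camera frame whose y-axis points down
    camera_height -- true camera height above the ground, in metres
    """
    ys = sorted(p[1] for p in ground_points)
    h_est = ys[len(ys) // 2]        # median y: robust up-to-scale height
    return camera_height / h_est

# The up-to-scale translation is then multiplied by this factor; e.g. for
# a KITTI-like setup with the camera mounted about 1.65 m above the road:
# t_metric = [absolute_scale(pts, 1.65) * ti for ti in t]
```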



Author information

Corresponding author: Ali Hosseininaveh Ahmadabadian.



Cite this article

Zakaryaie Nejad, Z., Hosseininaveh Ahmadabadian, A. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs. Machine Vision and Applications 30, 1061–1070 (2019).



Keywords

  • Localization
  • Visual odometry
  • Raspberry Pi
  • ARM