
Environment Virtualization for Visual Localization and Mapping

  • David Valiente (corresponding author)
  • Yerai Berenguer
  • Luis Payá
  • Nuno M. Fonseca Ferreira
  • Oscar Reinoso
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1023)

Abstract

Mobile robotics has become an essential part of many subjects within most Bachelor’s and Master’s degrees in engineering. Visual sensors have emerged as a powerful tool to perform reliable localization and mapping tasks for a mobile robot. Moreover, the use of images makes it possible to tackle other high-level tasks such as object and people detection, recognition, or tracking. Nonetheless, our teaching experience confirms that students encounter many difficulties before they can deal with visual localization and mapping algorithms. Initial stages such as data acquisition (images and trajectory), preprocessing, or visual feature extraction usually demand a considerable effort from many students. Consequently, the teaching process is prolonged, while active learning and student achievement are negatively affected. Considering these facts, we have implemented a Matlab software tool that generates a wide variety of virtual environments. This allows students to easily obtain synthetic raw data according to a predefined robot trajectory inside the designed environment. The virtualization software also produces a set of images along the trajectory for performing visual localization and mapping experiments. As a result, the overall testing procedure is simplified, and students report taking better advantage of the lectures and practical sessions, demonstrating higher achievement in terms of comprehension of fundamental mobile robotics concepts. Comparison results regarding student achievement, engagement, satisfaction, and attitude towards the use of the tool are presented.
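
The data-generation workflow described above can be illustrated with a minimal, hypothetical Matlab sketch: it builds a simple synthetic environment as a cloud of 3D landmarks, defines a circular robot trajectory, and computes the bearing and elevation at which each landmark would be seen from every pose, mimicking the kind of raw data (trajectory plus omnidirectional observations) that the tool delivers to students. All names, the room geometry, and the camera model are illustrative assumptions, not the actual interface of the tool described in the paper.

```matlab
% Hypothetical sketch of the data-generation workflow (not the actual tool):
% synthetic environment, predefined trajectory, and idealized omnidirectional
% observations (bearing/elevation of each landmark from each robot pose).

rng(1);                                   % reproducible landmark layout

% 1) Virtual environment: N random 3D landmarks inside a 10 x 10 x 3 m room
N = 200;
landmarks = [10*rand(N,1), 10*rand(N,1), 3*rand(N,1)];

% 2) Predefined robot trajectory: K poses on a circular path (ground plane)
K = 50;
t = linspace(0, 2*pi, K)';
traj  = [5 + 3*cos(t), 5 + 3*sin(t)];     % (x, y) positions
theta = t + pi/2;                         % heading tangent to the path

% 3) Synthetic observations: angles at which an ideal omnidirectional camera
%    (mounted 0.5 m above the ground) would see every landmark at each pose
obs = cell(K, 1);
for k = 1:K
    dx = landmarks(:,1) - traj(k,1);
    dy = landmarks(:,2) - traj(k,2);
    dz = landmarks(:,3) - 0.5;
    bearing   = mod(atan2(dy, dx) - theta(k) + pi, 2*pi) - pi;  % wrap to [-pi, pi)
    elevation = atan2(dz, hypot(dx, dy));
    obs{k} = [bearing, elevation];        % one row per landmark
end

% Quick visual check of the environment and the trajectory
figure; hold on; grid on; axis equal;
plot3(landmarks(:,1), landmarks(:,2), landmarks(:,3), '.');
plot3(traj(:,1), traj(:,2), zeros(K,1), 'r-', 'LineWidth', 1.5);
view(3);
```

Starting from synthetic observations of this kind, students can feed standard localization and mapping routines directly, without first spending time on image acquisition, preprocessing, or feature extraction.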

Keywords

Virtual environments · Visual localization · Omnidirectional image · Simulation

Notes

Acknowledgements

This work has been partially supported by: the Spanish Government (DPI2016-78361-R, AEI/FEDER, UE); the Valencian Research Council and the European Social Fund (post-doctoral grant APOSTD/2017/028).

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • David Valiente (1, corresponding author)
  • Yerai Berenguer (1)
  • Luis Payá (1)
  • Nuno M. Fonseca Ferreira (2)
  • Oscar Reinoso (1)
  1. Department of Systems Engineering and Automation, Miguel Hernández University, Elche (Alicante), Spain
  2. Department of Electrical Engineering (DEE), Engineering Institute of Coimbra (ISEC), Polytechnic Institute of Coimbra (IPC), Coimbra, Portugal