A Multi-Sensor-Based Terrain Perception Model for Locomotion Selection of Hybrid Mobile Robots

  • Original Research
  • Published in: SN Computer Science

Abstract

Autonomous mobile robots can travel and perform activities without human involvement. These robots are outfitted with various sensors, actuators, and onboard computing capabilities to detect their environment, make decisions, and execute tasks independently. The performance of such robots depends on how well they perceive their environment. Most sensors are aimed at the surroundings to support decision-making and path planning. This paper takes a complementary approach that increases the robot's perception ability: standard sensors such as cameras, light sensors, distance sensors, accelerometers, and gyroscopes are combined in a compact, ground-facing sensor box that gives the robot terrain perception. The sensor box uses a supervised learning-based model to categorize terrain types and to estimate how practicable each terrain is as a percentage, referred to as the terrain perception percentage. The experimental results show that the terrain perception model gives a reliable indication of terrain practicability, thereby increasing the perception abilities of robots.
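The abstract only outlines the pipeline, so the sketch below is a hedged illustration rather than the authors' implementation: it fuses readings from a ground-facing sensor box (camera patch, light, distance, accelerometer, gyroscope) into a feature vector, trains a supervised classifier, and combines the predicted class with its confidence into a terrain perception percentage. The terrain classes, practicability weights, feature choices, and the random-forest model are all assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's implementation): fuse ground-facing
# sensor-box readings into a feature vector, train a supervised classifier,
# and report a terrain class plus a "terrain perception percentage".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TERRAIN_CLASSES = ["asphalt", "grass", "gravel", "sand"]          # assumed labels
PRACTICABILITY = {"asphalt": 0.95, "grass": 0.70,
                  "gravel": 0.55, "sand": 0.35}                   # assumed weights

def make_feature_vector(camera_gray, light_lux, distance_cm, accel_xyz, gyro_xyz):
    """Collapse one sensor-box sample into a fixed-length feature vector."""
    return np.array([
        camera_gray.mean(), camera_gray.std(),         # coarse texture cues
        light_lux,                                      # reflected/ambient light
        distance_cm,                                    # ground clearance
        np.linalg.norm(accel_xyz),                      # vibration magnitude
        np.linalg.norm(gyro_xyz),                       # angular-rate magnitude
    ])

def terrain_perception_percentage(model, features):
    """Blend class practicability with the classifier's confidence."""
    probs = model.predict_proba([features])[0]
    label = TERRAIN_CLASSES[int(model.classes_[np.argmax(probs)])]
    return label, 100.0 * probs.max() * PRACTICABILITY[label]

# Training stage: placeholder random data stands in for labelled sensor-box logs.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                    # recorded feature vectors
y_train = rng.integers(0, len(TERRAIN_CLASSES), 200)   # terrain labels (indices)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Inference stage: one synthetic sensor-box sample.
sample = make_feature_vector(
    camera_gray=rng.random((64, 64)),  # grayscale ground patch
    light_lux=310.0,
    distance_cm=4.2,
    accel_xyz=[0.1, -0.2, 9.7],
    gyro_xyz=[0.01, 0.02, -0.01],
)
label, pct = terrain_perception_percentage(model, sample)
print(f"terrain: {label}, perception percentage: {pct:.1f}%")
```

In the paper's setting, the placeholder random training data would be replaced by labelled logs recorded from the sensor box over each terrain type.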


Data Availability

This manuscript has no associated data, and no future data deposition is planned.


Acknowledgements

This article has not previously been submitted to any other journal or conference.

Funding

This work was supported by REVA University, Bangalore, India, under university seed funding granted on 28-02-2022 [Grant No. RU:EST:EC:2022/41].

Author information

Contributions

All authors contributed to the writing of the paper. Kouame Yann Olivier Akansie wrote the first and final drafts of the manuscript, and all authors commented on previous versions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kouame Yann Olivier Akansie.

Ethics declarations

Conflict of Interest

The authors declare no conflict of interest.

Informed Consent

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advances in Computational Approaches for Image Processing, Wireless Networks, Cloud Applications and Network Security” guest edited by P. Raviraj, Maode Ma and Roopashree H R.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Akansie, K.Y.O., Biradar, R.C. & Karthik, R. A Multi-Sensor-Based Terrain Perception Model for Locomotion Selection of Hybrid Mobile Robots. SN COMPUT. SCI. 5, 512 (2024). https://doi.org/10.1007/s42979-024-02858-6

Keywords

Navigation