
An Innovative Vision System for Floor-Cleaning Robots Based on YOLOv5

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13256)

Abstract

The implementation of a robust vision system in floor-cleaning robots enables them to optimize their navigation and to analyse the surrounding floor, reducing the consumption of power, water, and chemical products. In this paper, we propose a novel vision-system pipeline to be integrated into floor-cleaning robots. The vision system was built upon the YOLOv5 framework, and its role is to detect dirty spots on the floor. It is fed by two cameras: one on the front and one on the back of the floor-cleaning robot. The front camera saves energy and resources by controlling the robot's speed and how much water and detergent are spent according to the detected dirt. The back camera evaluates the cleaning and aids the navigation node: it helps the robot understand whether the cleaning was effective and whether it needs to return later for a second sweep. A self-calibration algorithm was implemented for both cameras to stabilize image intensity and improve the robustness of the vision system. A YOLOv5 model was trained with carefully prepared training data. A new dataset was collected in an automotive factory using the floor-cleaning robot. A hybrid training dataset was used, consisting of the Automation and Control Institute (ACIN) dataset, the automotive factory dataset, and a synthetic dataset. Data augmentation was applied to enlarge the dataset and to balance the classes. Finally, our vision system attained a mean average precision (mAP) of 0.7 on the testing set.
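The self-calibration step described above stabilizes image intensity before detection. The paper does not reproduce its algorithm here; the sketch below is only a minimal illustration of the general idea, a proportional feedback loop that nudges a camera gain toward a target mean intensity. All parameter names and values (`target_mean`, `k`, the gain range) are hypothetical.

```python
import numpy as np

def adjust_gain(frame, gain, target_mean=110.0, k=0.05, gain_range=(0.25, 4.0)):
    """One step of a proportional controller that nudges camera gain
    toward a target mean image intensity (illustrative parameters only)."""
    mean = float(frame.mean())
    # Proportional update: raise gain for dark frames, lower it for bright ones.
    gain *= 1.0 + k * (target_mean - mean) / target_mean
    # Keep the gain within the sensor's admissible range.
    return float(np.clip(gain, *gain_range))

# A too-dark frame (mean 40) pushes the gain up; a too-bright one pushes it down.
dark_gain = adjust_gain(np.full((4, 4), 40.0), gain=1.0)
bright_gain = adjust_gain(np.full((4, 4), 200.0), gain=1.0)
```

In a real deployment, such a loop would run on each incoming frame and write the corrected value back to the camera driver; the cited self-calibration work operates on colourimetric camera parameters in a comparable closed-loop fashion.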

Supported by the project “i-RoCS: Research and Development of an Intelligent Robotic Cleaning System” (Ref. POCI-01-0247-FEDER-039947), co-financed by COMPETE 2020 and Regional Operational Program Lisboa 2020, through Portugal 2020 and FEDER.
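The abstract states that the front camera controls the robot's speed and the amount of water and detergent according to the detected dirt. The paper's actual control node is not given here; the following sketch only illustrates one plausible mapping from detection confidences to actuation values, with entirely hypothetical thresholds and units.

```python
def plan_cleaning(detections, base_speed=0.5, base_flow=1.0):
    """Map front-camera dirt detections (a list of confidences in [0, 1])
    to a robot speed (m/s) and a detergent flow (illustrative units).
    All scaling factors are hypothetical."""
    if not detections:
        # Clean floor ahead: drive at full speed, dispense no detergent.
        return base_speed, 0.0
    # Saturating "dirt severity" score from the summed confidences.
    severity = min(1.0, sum(detections) / 3.0)
    speed = base_speed * (1.0 - 0.6 * severity)  # slow down over dirty areas
    flow = base_flow * severity                  # dose detergent to dirt level
    return round(speed, 3), round(flow, 3)
```

Under this sketch, an empty detection list yields full speed with zero flow, while several high-confidence detections saturate the severity score and trigger the slowest speed and maximum dosing.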



Author information

Correspondence to Daniel Canedo.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Canedo, D., Fonseca, P., Georgieva, P., Neves, A.J.R. (2022). An Innovative Vision System for Floor-Cleaning Robots Based on YOLOv5. In: Pinho, A.J., Georgieva, P., Teixeira, L.F., Sánchez, J.A. (eds) Pattern Recognition and Image Analysis. IbPRIA 2022. Lecture Notes in Computer Science, vol 13256. Springer, Cham. https://doi.org/10.1007/978-3-031-04881-4_30

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-04881-4_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-04880-7

  • Online ISBN: 978-3-031-04881-4

  • eBook Packages: Computer Science, Computer Science (R0)
