Abstract
The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a robotic manipulator system to collect litter in outdoor environments. The system has three functionalities. First, it uses colour images to detect and recognise litter made of different materials. Second, depth data are combined with the pixels of the waste objects to compute a 3D location and to segment a three-dimensional point cloud for each litter item in the scene; a grasp in 3 Degrees of Freedom (DoFs) is then estimated from the segmented cloud of each waste instance for a robot arm equipped with a gripper. Finally, two tactile-based algorithms provide the gripper with a sense of touch, using two low-cost vision-based tactile sensors at the fingertips. One algorithm detects contact between the gripper and solid waste from tactile images, while the other detects slippage in order to prevent grasped objects from falling. Our proposal was tested in extensive experiments with objects varying in size, texture, geometry and material, in several outdoor environments (a tiled pavement, a stone/soil surface, and grass). Our system achieved an average detection and Collection Success Rate (CSR) of 94% overall, and of 80% for collecting litter items at the first attempt.
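The second step described above, combining an instance-segmentation mask with depth data to obtain a 3D location and a per-instance point cloud, can be sketched with the standard pinhole back-projection. This is a minimal illustrative sketch, not the authors' exact implementation; the function names, the toy depth image, and the intrinsic values are assumptions for the example.

```python
import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels of one segmented litter instance
    into a 3D point cloud using the pinhole camera model.

    depth: HxW array of depth values in metres (0 = invalid)
    mask:  HxW boolean array from the instance-segmentation network
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point)
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel coordinates of the instance
    z = depth[v, u]
    x = (u - cx) * z / fx                  # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)     # N x 3 cloud in the camera frame

def object_location(cloud):
    """Rough 3D location of the object: the centroid of its cloud."""
    return cloud.mean(axis=0)

# Toy example: a 4x4 depth image with a 2x2 object region one metre away.
depth = np.zeros((4, 4))
depth[1:3, 1:3] = 1.0
mask = depth > 0
cloud = mask_to_point_cloud(depth, mask, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)             # (4, 3)
print(object_location(cloud))  # centroid close to the optical axis, z = 1.0
```

In a full pipeline the resulting per-instance cloud would then be passed to the 3-DoF grasp estimator, and the centroid gives the approach target for the arm.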
Availability of data and materials
The dataset D1 designed and used in this work is publicly available at HOWA_dataset. Datasets D2 and D3 are not publicly available at this moment, but may be obtained from the Institutional Repository of the University of Alicante or by request to the authors.
Acknowledgements
This research was funded by the Valencian Regional Government through the PROMETEO/2021/075 project. The computer facilities were provided by the Valencian Government and FEDER through the IDIFEFER/2020/003 project.
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project. The computer facilities were provided through the IDIFEFER/2020/003 project.
Author information
Authors and Affiliations
Contributions
I.L.P. and S.T.P. contributed to the design and implementation of the vision system. J.C.A. and P.G. contributed to the design and implementation of the tactile system. I.L.P. and J.C.A. carried out the experiments in real environments. All authors analysed the results and wrote the manuscript.
Corresponding authors
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethics approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
The authors give the Publisher permission to publish the Work in this journal.
Code availability
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file 1 (mp4 96883 KB)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Páez-Ubieta, I.d.L., Castaño-Amorós, J., Puente, S.T. et al. Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments. J Intell Robot Syst 109, 36 (2023). https://doi.org/10.1007/s10846-023-01930-2