
Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models

  • Conference paper
In: Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14254)


Abstract

We introduce an approach to building a custom model from ready-made self-supervised models by associating them rather than training or fine-tuning them. We demonstrate it with a humanoid robot that looks into a mirror and learns to detect the 3D pose of its own body from the image it perceives. To build our model, we first obtain features of the visual input and of the robot's body postures from models prepared before the robot's operation. We then map the corresponding latent spaces onto each other through the robot's sample-efficient self-exploration at the mirror. In this way, the robot builds the desired 3D pose detector, whose quality is immediately perfect on the acquired samples rather than improving gradually. The mapping, which associates pairs of feature vectors, is implemented in the same way as the key–value mechanism of transformer models. Finally, deploying our model for imitation on a simulated robot allows us to study, tune, and systematically evaluate its hyperparameters without involving a human counterpart, advancing our previous research.
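The association described in the abstract can be sketched as a transformer-style key–value lookup over stored pairs of feature vectors, where (under the assumption that visual features act as keys and posture features as values) each self-exploration sample at the mirror simply adds one pair, and detection is a soft attention query. The following Python sketch is illustrative only and is not the authors' code; the class name, feature dimensions, and temperature parameter are assumptions made for the example.

```python
# A minimal sketch (not the authors' implementation) of associating two
# frozen self-supervised latent spaces via a transformer-style key-value
# (attention) lookup, instead of training or fine-tuning a new model.

import numpy as np


class KeyValueAssociator:
    """Stores (visual key, posture value) pairs and answers queries by soft attention."""

    def __init__(self, temperature: float = 0.1):
        self.keys = []      # visual feature vectors (keys)
        self.values = []    # corresponding posture feature vectors (values)
        self.temperature = temperature

    def associate(self, visual_feat: np.ndarray, pose_feat: np.ndarray) -> None:
        # One self-exploration sample at the mirror adds one key-value pair;
        # no gradient-based training is involved.
        self.keys.append(visual_feat / np.linalg.norm(visual_feat))
        self.values.append(pose_feat)

    def query(self, visual_feat: np.ndarray) -> np.ndarray:
        # Dot-product attention over the stored keys, as in transformer
        # attention, returns a blend of the stored posture features.
        K = np.stack(self.keys)                        # (N, d_k)
        V = np.stack(self.values)                      # (N, d_v)
        q = visual_feat / np.linalg.norm(visual_feat)  # (d_k,)
        scores = K @ q / self.temperature              # (N,) similarities
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                       # softmax attention weights
        return weights @ V                             # (d_v,) estimated pose features


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory = KeyValueAssociator()
    # Hypothetical exploration: pair placeholder "visual" and "pose" features.
    for _ in range(100):
        memory.associate(rng.normal(size=384), rng.normal(size=32))
    estimated_pose_feat = memory.query(rng.normal(size=384))
    print(estimated_pose_feat.shape)  # (32,)
```

In such a scheme the answer is exact for any stored sample (its own key dominates the softmax), which matches the paper's claim that quality is immediately perfect on the acquired samples rather than improving gradually.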


Notes

  1. See the video at https://youtu.be/-3BVbU9BeRE.

  2. See the video at https://youtu.be/_CBnCOnWRdY.

  3. See the video at https://youtu.be/ZNkF5BTKOLU.

  4. See the video at https://youtu.be/G6xWAKDMpsM.


Acknowledgements

This work was supported by the EU-funded project TERAIS, no. 101079338, and partly by the national VEGA 1/0373/23 project.

Author information

Corresponding author: Andrej Lúčny.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lúčny, A., Malinovská, K., Farkaš, I. (2023). Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_39

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-44207-0_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44206-3

  • Online ISBN: 978-3-031-44207-0

  • eBook Packages: Computer Science (R0)
