Abstract
We develop an autonomous mobile robot that supports warehouse workers and reduces their burden. Using a LiDAR sensor, the proposed robot acquires a state-action policy for avoiding obstacles and reaching a destination via reinforcement learning. In real-world applications of reinforcement learning, policies learned in a simulation environment often perform poorly when transferred directly to a real robot, owing to uncertainties that simulation does not capture, such as friction and sensor noise. To address this problem, we propose a method that improves the action control of an omni-wheel robot via transfer learning from simulation to the real environment. As an experiment, we searched for a route to a goal in a real environment using the transferred policy and verified the effectiveness of the acquired policy.
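The transfer scheme described above can be illustrated with a minimal sketch, not the authors' implementation: tabular Q-learning on a toy one-dimensional corridor, where the Q-table learned in "simulation" is reused as the starting point for fine-tuning in a perturbed "real" environment. The corridor size, reward values, and the slip-noise model of real-world actuation are all assumptions made for illustration.

```python
import random

# Hypothetical toy environment: a corridor of N cells with the goal at the
# right end. `slip` models real-world actuation noise absent in simulation.
N = 8
ACTIONS = (-1, 1)  # move left / move right

def step(state, action, slip=0.0):
    """One environment step; with probability `slip` the action is flipped."""
    if random.random() < slip:
        action = -action
    nxt = min(max(state + action, 0), N - 1)
    reward = 1.0 if nxt == N - 1 else -0.01   # goal reward, small step cost
    return nxt, reward, nxt == N - 1

def train(Q, episodes, slip, alpha=0.1, gamma=0.95, eps=0.2):
    """Epsilon-greedy tabular Q-learning; updates Q in place."""
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a, slip)
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

random.seed(0)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
train(Q, episodes=300, slip=0.0)   # "simulation": noise-free dynamics
train(Q, episodes=100, slip=0.1)   # "real" fine-tuning: 10% slip noise

# Greedy action per non-terminal cell after transfer and fine-tuning.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

Reusing the simulation-trained Q-table means the real-environment phase only has to correct for the dynamics mismatch, rather than learn from scratch, which is the essence of the sim-to-real transfer the abstract describes.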
This work was presented in part at the 26th International Symposium on Artificial Life and Robotics (Online, January 21–23, 2021).
Ushida, Y., Razan, H., Ishizuya, S. et al. Using sim-to-real transfer learning to close gaps between simulation and real environments through reinforcement learning. Artif Life Robotics 27, 130–136 (2022). https://doi.org/10.1007/s10015-021-00713-y