Robot learning comprises a multitude of machine learning approaches, in particular reinforcement learning, inverse reinforcement learning, and regression methods. These methods have been adapted sufficiently to the robotics domain to achieve real-time learning in complex robot systems such as helicopters, flapping-wing flyers, legged robots, anthropomorphic arms, and humanoid robots.
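To make the reinforcement learning setting concrete, the following is a minimal toy sketch (not taken from the text) of finite-difference policy gradient, one of the simplest policy search methods applied in robotics. A one-dimensional "robot" with a linear feedback policy u = theta * x must drive its state toward zero; the reward is the negative squared final state. All task details here are invented for illustration.

```python
def rollout(theta, x0=1.0, steps=10):
    """Simulate the toy system x <- x + 0.1 * u under the linear
    policy u = theta * x, and return the episode reward."""
    x = x0
    for _ in range(steps):
        x = x + 0.1 * theta * x
    return -x ** 2  # higher reward = final state closer to the origin


theta = 0.0  # initial policy parameter
for _ in range(200):
    # Estimate the reward gradient by perturbing the policy parameter
    # (a finite-difference approximation of the policy gradient).
    eps = 0.1
    grad = (rollout(theta + eps) - rollout(theta - eps)) / (2 * eps)
    theta += 0.5 * grad  # gradient ascent on the episode reward

# The learned gain ends up negative, i.e., a damping controller
# that pulls the state toward the origin.
```

Real robot learning problems differ mainly in scale: policies have many parameters, rollouts are noisy and expensive, and gradients are estimated from sampled trajectories rather than exact perturbations.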
Robot Skill Learning Problems
In classical artificial intelligence-based robotics approaches, scientists attempted to manually generate a set of rules and models that allow the robot systems to sense and act in the real world. In contrast, robot learning has become an interesting problem in robotics as (1) it may be prohibitively hard to program a robot for many tasks, (2) not all situations, as well as goals, may be foreseeable, and (3) real-world environments are often nonstationary (Connell and Mahadevan, 1993). Hence, future robots need to be able to adapt to the real world.
In comparison to many...
- Recently, several special issues (Morimoto et al., 2010; Peters and Ng, 2009) and books (Sigaud and Peters, 2010) have covered the domain of robot learning. The classical book by Connell and Mahadevan (1993) remains interesting nearly 20 years after its publication. Additional special topics are treated in Apolloni et al. (2005) and Thrun et al. (2005).
- Apolloni, B., Ghosh, A., Alpaslan, F. N., Jain, L. C., & Patnaik, S. (2005). Machine learning and robot perception. Studies in computational intelligence (Vol. 7). Berlin: Springer.
- Farrell, J. A., & Polycarpou, M. M. (2006). Adaptive approximation based control. Adaptive and learning systems for signal processing, communications and control series. Hoboken: John Wiley.
- Ham, J., Lin, Y., & Lee, D. D. (2005). Learning nonlinear appearance manifolds for robot localization. In International conference on intelligent robots and systems, Takamatsu, Japan.
- Jenkins, O., Bodenheimer, R., & Peters, R. (2006). Manipulation manifolds: Explorations into uncovering manifolds in sensory-motor spaces. In International conference on development and learning, Bloomington, IN.
- Kober, J., & Peters, J. (2009). Policy search for motor primitives in robotics. In Advances in neural information processing systems 22. Cambridge: MIT Press.
- Peters, J., & Ng, A. (2009). Special issue on robot learning. Autonomous Robots, 27(1–2), 1–144.
- Schaal, S., Atkeson, C. G., & Vijayakumar, S. (2002). Scalable techniques from nonparametric statistics for real-time robot learning. Applied Intelligence, 17(1), 49–60.
- Sigaud, O., & Peters, J. (2010). From motor learning to interaction learning in robots. Studies in computational intelligence (Vol. 264). Heidelberg: Springer.
- Tedrake, R., Zhang, T. W., & Seung, H. S. (2004). Stochastic policy gradient reinforcement learning on a simple 3D biped. In Proceedings of the IEEE international conference on intelligent robots and systems (IROS 2004) (pp. 2849–2854). Sendai, Japan.