Exploiting qualitative knowledge to enhance skill acquisition

  • Cristina Baroglio
Part II: Regular Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1224)

Abstract

One of the most interesting problems faced by Artificial Intelligence researchers is to reproduce a capability typical of living beings: learning to perform motor tasks, a problem known as skill acquisition. It is a very difficult goal, because the overall behavior of an agent is the result of a complex activity involving sensory, planning, and motor processing. In this paper, I present a novel approach to acquiring new skills, named Soft Teaching, characterized by a learning-by-experience process in which an agent exploits a symbolic, qualitative description of the task to perform; this description cannot, however, be used directly for control purposes. A specific Soft Teaching technique, named Symmetries, was implemented and tested on a continuous-domain version of the well-known pole-balancing task.
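
To make the idea concrete, the following is a minimal sketch (not the paper's actual Symmetries algorithm) of how qualitative knowledge that is not itself a controller can still guide learning: a tabular Q-learner on the standard cart-pole equations exploits the task's mirror symmetry by replaying each real transition in mirrored form. All function names (`step`, `box`, `mirror`) and parameter values are illustrative assumptions.

```python
# Hypothetical sketch: exploiting a mirror symmetry of pole-balancing as
# qualitative knowledge during Q-learning. Names and constants are
# illustrative, not taken from the paper.

import math
import random
from collections import defaultdict

# --- Cart-pole dynamics (standard continuous formulation) ---
GRAVITY, CART_M, POLE_M, POLE_L, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(state, force):
    """One Euler step of the cart-pole equations of motion."""
    x, x_dot, th, th_dot = state
    total_m = CART_M + POLE_M
    sin_th, cos_th = math.sin(th), math.cos(th)
    tmp = (force + POLE_M * POLE_L * th_dot ** 2 * sin_th) / total_m
    th_acc = (GRAVITY * sin_th - cos_th * tmp) / (
        POLE_L * (4.0 / 3.0 - POLE_M * cos_th ** 2 / total_m))
    x_acc = tmp - POLE_M * POLE_L * th_acc * cos_th / total_m
    return (x + DT * x_dot, x_dot + DT * x_acc,
            th + DT * th_dot, th_dot + DT * th_acc)

def failed(state):
    x, _, th, _ = state
    return abs(x) > 2.4 or abs(th) > math.radians(12)

# --- Coarse discretization so a tabular learner applies ---
def box(state):
    return tuple(min(5, max(-5, int(v / b))) for v, b in
                 zip(state, (0.5, 0.5, 0.05, 0.2)))

def mirror(state):
    """The qualitative hint: the task is invariant under state/action negation."""
    return tuple(-v for v in state)

ACTIONS = (-10.0, 10.0)          # push left / push right (Newtons)
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.2, 0.99, 0.1

def update(s, a, r, s2):
    """Standard Q-learning backup on one (s, a, r, s') sample."""
    best = 0.0 if failed(s2) else max(Q[(box(s2), act)] for act in ACTIONS)
    key = (box(s), a)
    Q[key] += ALPHA * (r + GAMMA * best - Q[key])

for episode in range(500):
    s = tuple(random.uniform(-0.05, 0.05) for _ in range(4))
    for t in range(1000):        # cap episode length
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(box(s), act)])
        s2 = step(s, a)
        r = -1.0 if failed(s2) else 0.0
        update(s, a, r, s2)
        # Each real transition also yields a virtual mirrored transition,
        # doubling the experience extracted from every interaction.
        update(mirror(s), -a, r, mirror(s2))
        if failed(s2):
            break
        s = s2
```

The symmetry is used only to generate extra training experience; it never prescribes an action directly, which matches the abstract's point that the qualitative description cannot serve as a controller by itself.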

Keywords

skill acquisition, knowledge-based feedback, adaptive agents

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Cristina Baroglio
    Dip. di Informatica, Università degli Studi di Torino, Italy
