
Learning Elements of Representations for Redescribing Robot Experiences

  • Laura Firoiu
  • Paul Cohen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1642)

Abstract

This paper presents our first efforts toward learning simple logical representations from robot sensory data and thus toward a solution for the perceptual grounding problem [2]. The elements of representations learned by our method are states that correspond to stages during the robot’s experiences, and atomic propositions that describe the states. The states are found by an incremental hidden Markov model induction algorithm; the atomic propositions are immediate generalizations of the probability distributions that characterize the states. The state induction algorithm is guided by the minimum description length criterion: the time series of the robot’s sensor values for several experiences are redescribed in terms of states and atomic propositions and the model that yields the shortest description (of both model and time series) is selected.
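To make the selection criterion concrete, here is a minimal sketch of two-part MDL scoring for discrete HMMs. It is an illustration, not the paper's incremental induction algorithm: the model structures, the toy observation sequence, and the conventional 0.5·log2(T)-bits-per-parameter model cost are all assumptions introduced for the example.

```python
import math

def forward_log2_prob(obs, pi, A, B):
    """Log2-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    log_p = math.log2(c)
    alpha = [a / c for a in alpha]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][obs[t]]
                 for s in range(n)]
        c = sum(alpha)
        log_p += math.log2(c)
        alpha = [a / c for a in alpha]
    return log_p

def description_length(obs, pi, A, B, n_free_params):
    """Two-part MDL score: model bits (0.5 * log2(T) per free parameter,
    a standard asymptotic code length) plus data bits (-log2 P(obs | model))."""
    model_bits = 0.5 * math.log2(len(obs)) * n_free_params
    data_bits = -forward_log2_prob(obs, pi, A, B)
    return model_bits + data_bits

# Hypothetical binarized sensor trace with two clear stages
# (e.g. "moving" then "stopped").
obs = [0] * 20 + [1] * 20

# One-state model: a single emission distribution must cover both stages.
dl_one = description_length(obs, [1.0], [[1.0]], [[0.5, 0.5]],
                            n_free_params=1)

# Two-state model: one state per stage, rare transitions between them.
dl_two = description_length(
    obs,
    [0.5, 0.5],
    [[0.95, 0.05], [0.05, 0.95]],
    [[0.99, 0.01], [0.01, 0.99]],
    n_free_params=5,
)

print(dl_one, dl_two)  # the two-state model costs more parameter bits
                       # but saves far more data bits, so it is selected
```

The trade-off mirrors the abstract: extra states raise the model cost but shorten the redescription of the time series, and the model minimizing the sum is kept.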

Keywords

Hidden Markov Model · Logical Representation · Minimum Description Length · Translational Velocity · Atomic Proposition
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Paul R. Cohen and Mary Litch. What are contentful mental states? Dretske's theory of mental content viewed in the light of robot learning and planning algorithms. To be presented at the Sixteenth National Conference on Artificial Intelligence, 1999.
  2. S. Harnad. The symbol grounding problem. Physica D, 42:335–346, 1990.
  3. Teuvo Kohonen. Self-Organizing Maps. Springer, 1995.
  4. Jean M. Mandler. How to build a baby: II. Conceptual primitives. Psychological Review, 99(4):587–604, 1992.
  5. J. J. Oliver and D. Hand. Introduction to minimum encoding inference. Technical Report 4-94, Statistics Dept., Open University, September 1994. TR 95/205 Computer Science, Monash University.
  6. Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–285, 1989.
  7. Michael Rosenstein and Paul R. Cohen. Concepts from time series. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 739–745. AAAI Press, 1998.
  8. Matthew D. Schmill, Tim Oates, and Paul R. Cohen. Learned models for continuous planning. In Proceedings of Uncertainty 99: The Seventh International Workshop on Artificial Intelligence and Statistics, pages 278–282, 1999.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Laura Firoiu (1)
  • Paul Cohen (1)
  1. Computer Science Department, University of Massachusetts at Amherst, Amherst
