Autonomous Discovery of Abstractions through Interaction with an Environment
Knowledge representation. How can we efficiently represent the knowledge learned in one task and reuse it for other tasks? This knowledge can take the form of a control policy learned to solve one task or a representation of structure in an environment.
Autonomous discovery of structure. My dissertation focuses on autonomously identifying and creating useful temporal abstractions from an agent's interaction with its environment.
Interaction of reinforcement learning and supervised learning methods. I am particularly interested in the combined use of these techniques to create more robust and autonomous learning systems.
Application of these techniques to robots, with a particular focus on robots that assist humans in space.
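One concrete route to discovering temporal abstractions, developed in the dissertation and in McGovern and Barto (ICML 2001), is to treat subgoal discovery as a multiple-instance learning problem scored by diverse density: states visited on every successful trajectory but on no unsuccessful one (bottlenecks such as doorways) become candidate subgoals for new options. The sketch below is a simplified, hypothetical illustration of that idea for discrete states with exact matching, not the full algorithm (which uses running averages of diverse density and excludes states near the start and goal); all names are illustrative.

```python
def diverse_density(state, positive_bags, negative_bags):
    """Noisy-or diverse density of `state` over trajectory "bags".

    With exact state matching, Pr(state | bag) is 1 if the state occurs
    in the bag and 0 otherwise, so the score reduces to a product of
    indicator terms: 1.0 only for states in every positive (successful)
    bag and in no negative (unsuccessful) bag.
    """
    score = 1.0
    for bag in positive_bags:          # every successful trajectory should visit the state
        score *= 1.0 if state in bag else 0.0
    for bag in negative_bags:          # no unsuccessful trajectory should visit it
        score *= 0.0 if state in bag else 1.0
    return score

def candidate_subgoals(positive_bags, negative_bags):
    """Return the states that maximize diverse density over all visited states."""
    states = set().union(*positive_bags, *negative_bags)
    scored = {s: diverse_density(s, positive_bags, negative_bags) for s in states}
    best = max(scored.values())
    return sorted(s for s, v in scored.items() if v == best)

# Toy example: two successful episodes pass through a "door" state,
# the failed episode never reaches it, so "door" (and the goal itself,
# which the full method would filter out) maximize diverse density.
successes = [["start", "door", "goal"], ["start", "hall", "door", "goal"]]
failures = [["start", "hall", "dead_end"]]
print(candidate_subgoals(successes, failures))  # → ['door', 'goal']
```

In the full method, the peak-diverse-density states seed the initiation sets and termination conditions of new options, which is how interaction data is turned into reusable temporal abstractions.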
Keywords: Reinforcement Learning · Learning Agent · Effective Problem Solver · Supervised Learning Method · Temporal Abstraction
1. Amy McGovern. Autonomous Discovery of Temporal Abstractions from Interaction with an Environment. PhD thesis, University of Massachusetts Amherst, 2002.
2. Amy McGovern and Andrew G. Barto. Accelerating reinforcement learning through the discovery of useful subgoals. In Proceedings of the 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS 2001), 2001. Electronically published.
3. Amy McGovern and Andrew G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. In C. Brodley and A. Danyluk, editors, Proceedings of the 18th International Conference on Machine Learning (ICML 2001), pages 361–368, San Francisco, CA, 2001. Morgan Kaufmann.