Comparing the Learning Processes of Cognitive Distance Learning and Search Based Agent
Our proposed cognitive distance learning agent generates a sequence of actions from a start state to a goal state in a problem state space. The agent learns the cognitive distance (path cost) between arbitrary pairs of states. Action generation at each state consists of selecting the next state with the minimum cognitive distance to the goal.
In this paper, we investigate the learning process of the agent through computer simulation in a tile-world state space. The average search cost decreases as the prior learning period grows longer and the problem solver becomes more familiar with the environment. After sufficient learning, the average search cost of the proposed method is reduced to 1/20 of that of a conventional search method.
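The greedy action-selection rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the table `D`, the helper names, and the toy 1-D state space are all assumptions introduced here for clarity.

```python
# Hedged sketch (not the authors' code): greedy action selection using a
# learned cognitive-distance table D[(s, g)] = estimated path cost from s to g.

def select_next_state(current, goal, neighbors, D, default=float("inf")):
    """Choose the neighbor with the minimum learned cognitive distance to goal."""
    return min(neighbors(current), key=lambda s: D.get((s, goal), default))

# Toy example: a 1-D corridor of states 0..4, with goal state 4.
def neighbors(s):
    return [n for n in (s - 1, s + 1) if 0 <= n <= 4]

# A fully learned table for this corridor: distance = number of steps to goal.
D = {(s, 4): abs(4 - s) for s in range(5)}

path = [0]
while path[-1] != 4:
    path.append(select_next_state(path[-1], 4, neighbors, D))
# path is [0, 1, 2, 3, 4]
```

Once the table covers enough state pairs, action generation needs no search at all, only local comparisons of stored distances, which is the source of the reported cost reduction.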