Heuristic Rule Induction for Decision Making in Near-Deterministic Domains
A large body of work in artificial intelligence focuses on planning and learning in arbitrarily stochastic domains. However, these methods demand significant computational resources (large transition models, vast numbers of samples), and the resulting representations are difficult to decompose into easily understood parts, even for deterministic or near-deterministic domains. This paper presents a rule induction method for (near-)deterministic domains, by which an unknown world can be described by a set of short rules with well-defined preconditions and effects after only a brief interaction with the environment. The extracted rules can then be used by the agent for decision making. We selected a multiplayer online game based on the SMAUG MUD server as a model of a near-deterministic domain and used our approach to infer rules about the world, generalising from a few examples. The agent starts with zero knowledge about the world and tries to explain it by generating hypotheses, refining them as they are refuted. The end result is a small set of meaningful rules that accurately describe the world. A simple planner using these rules performed near-optimally in a fight scenario.
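The hypothesise-and-refute loop described above can be illustrated by a minimal sketch. The rule representation, the state encoding as sets of propositional features, and the generalise/discard policy below are illustrative assumptions, not the paper's exact algorithm: a rule's precondition is generalised by intersection when a consistent transition is observed, and a rule that fires but mispredicts is refuted and discarded.

```python
# Minimal sketch of refutation-driven rule induction in a (near-)
# deterministic world. States and effects are sets of propositional
# features; all names here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str
    precond: set   # features that must hold for the rule to fire
    effect: set    # features predicted to hold afterwards

def update(rules, state, action, effect):
    """Revise the rule set after observing one transition."""
    # Consistent rule exists: generalise its precondition by
    # intersection, dropping features that vary across observations.
    for r in rules:
        if r.action == action and r.effect == effect:
            r.precond &= state
            return
    # Refutation: any rule that fires in this state but predicts a
    # different effect was over-generalised, so discard it.
    rules[:] = [r for r in rules
                if not (r.action == action
                        and r.precond <= state
                        and r.effect != effect)]
    # Hypothesise a new, maximally specific rule for this observation.
    rules.append(Rule(action, set(state), set(effect)))
```

For example, after observing the same effect of `attack` in two states that share only the feature `enemy_adjacent`, the learned precondition shrinks to that single feature, yielding the kind of short, interpretable rule the abstract describes.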