How to Design Good Rules for Multiple Learning Agents in Scheduling Problems?

  • Keiki Takadama
  • Masakazu Watabe
  • Katsunori Shimohara
  • Shinichi Nakasuka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1733)


This paper explores how to design good rules for multiple learning agents in scheduling problems, and investigates which factors are required to find good solutions at small computational cost. Through intensive simulations of crew task scheduling for a space shuttle/station, the following experimental results were obtained: (1) integrating (a) a solution-improvement factor, (b) an exploitation factor, and (c) an exploration factor contributes to finding good solutions at small computational cost; and (2) the condition part of rules, which includes flags indicating overlap, constraint, and same-situation conditions, supports the contribution of these three factors.
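The interplay of the three factors named in the abstract can be illustrated with a minimal, hypothetical sketch: an agent repeatedly picks a scheduling rule, mostly exploiting the strongest rule while occasionally exploring others, and always remembers the best schedule evaluation found so far. The rule representation, strength update, and ε value below are assumptions for illustration only, not the authors' actual organizational-learning classifier system.

```python
import random

def select_rule(strengths, epsilon, rng):
    """Pick a rule index: explore a random rule with probability epsilon,
    otherwise exploit the currently strongest rule."""
    if rng.random() < epsilon:                       # exploration factor
        return rng.randrange(len(strengths))
    return max(range(len(strengths)), key=strengths.__getitem__)  # exploitation factor

def run(evaluate, n_rules=4, steps=200, epsilon=0.1, seed=0):
    """Iteratively apply rules, reinforcing a rule when it improves the
    schedule and always retaining the best cost seen so far
    (solution-improvement factor). Lower cost = better schedule."""
    rng = random.Random(seed)
    strengths = [0.0] * n_rules      # hypothetical rule-strength values
    best = float("inf")
    for _ in range(steps):
        r = select_rule(strengths, epsilon, rng)
        cost = evaluate(r, rng)      # toy stand-in for evaluating a schedule
        if cost < best:              # solution-improvement factor
            best = cost
            strengths[r] += 1.0      # reinforce the improving rule
        else:
            strengths[r] -= 0.1      # weaken rules that stop improving
    return best, strengths
```

As a usage sketch, `run(lambda r, rng: abs(r - 2) + rng.random())` converges toward the best toy rule (index 2): exploitation drives repeated use of strong rules, the decay term lets the agent move on once a rule stops improving the schedule, and the retained `best` value preserves the gains.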


Keywords: rule design · scheduling problem · multiple learning agents · organizational learning · learning classifier system





Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Keiki Takadama (1)
  • Masakazu Watabe (1)
  • Katsunori Shimohara (1)
  • Shinichi Nakasuka (2)
  1. ATR Human Information Processing Research Labs., Soraku-gun, Japan
  2. Univ. of Tokyo, Bunkyo-ku, Japan
