Comparison and Analysis of Expertness Measure in Knowledge Sharing Among Robots

  • Panrasee Ritthipravat
  • Thavida Maneewarn
  • Jeremy Wyatt
  • Djitt Laowattana
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4031)

Abstract

Robot expertness measures are used to improve the learning performance of knowledge-sharing techniques. In this paper, several fuzzy Q-learning methods for knowledge sharing, namely Shared Memory, Weighted Strategy Sharing (WSS) and Adaptive Weighted Strategy Sharing (AdpWSS), are studied. A new measure of expertise based on regret evaluation is proposed. The regret measure takes into account the uncertainty bounds of the two best actions, i.e. the greedy action and the second-best action. Knowledge-sharing simulations and experiments on real robots were performed to compare the effectiveness of three expertness measures: Gradient (G), Average Move (AM) and the proposed measure. The proposed measure exhibited the best performance of the three. Moreover, when applied to AdpWSS, our measure does not require a predefined cooperation time, making it more practical to implement in real-world problems.
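
As a rough illustration of the idea, the sketch below scores an agent's expertise in a single state by the worst-case regret between its greedy action and the runner-up, then normalizes the scores into WSS-style sharing weights. This is a minimal sketch, not the paper's formulation: the function names, the 1/sqrt(n) uncertainty bound and the weight normalization are all illustrative assumptions.

```python
import numpy as np

def regret_expertness(q_values, visit_counts, z=1.96):
    """Regret-style expertness score for one state (illustrative).

    Assumed form: each action's uncertainty bound shrinks as
    1/sqrt(visits). Expertness is high when the greedy action's
    lower bound clears the runner-up's upper bound, i.e. when the
    worst-case regret of committing to the greedy action is low.
    Requires at least two actions.
    """
    q = np.asarray(q_values, dtype=float)
    n = np.maximum(np.asarray(visit_counts, dtype=float), 1.0)
    bounds = z / np.sqrt(n)                     # assumed uncertainty bound
    order = np.argsort(q)[::-1]                 # actions sorted best-first
    best, second = order[0], order[1]
    # Worst-case regret: runner-up's upper bound minus greedy lower bound.
    worst_case_regret = (q[second] + bounds[second]) - (q[best] - bounds[best])
    return -worst_case_regret                   # higher score = more expert

def wss_weights(expertness):
    """Turn expertness scores into WSS-style sharing weights (assumed)."""
    e = np.asarray(expertness, dtype=float)
    e = e - e.min()                             # shift to non-negative
    total = e.sum()
    return e / total if total > 0 else np.full(e.shape, 1.0 / e.size)
```

Under these assumptions, an agent whose greedy action clearly dominates its runner-up receives low worst-case regret, hence high expertness and a larger weight when strategies are blended.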

Keywords

Average Move · Knowledge Sharing · Real Robot · Learning Trial · Reward Rate

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Panrasee Ritthipravat (1)
  • Thavida Maneewarn (1)
  • Jeremy Wyatt (2)
  • Djitt Laowattana (1)

  1. FIBO, King Mongkut’s University of Technology Thonburi, Thailand
  2. School of Computer Science, University of Birmingham, United Kingdom