ADABA: An Algorithm to Improve the Parallel Search in Competitive Agents

  • Lídia Bononi Paiva Tomaz
  • Rita Maria Silva Julia
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 940)


Distributed versions of the Alpha-Beta search algorithm are commonly used as decision-making resources in agents designed to operate in competitive environments with large state spaces. In this context, the present paper proposes a new master-slave distributed version of Alpha-Beta, named ADABA (Asynchronous Distributed Alpha-Beta Algorithm), which is inspired by the well-known distributed Alpha-Beta variant APHID (Asynchronous Parallel Hierarchical Iterative Deepening). ADABA aims to improve on APHID's performance in two ways: it establishes a new policy for ordering the tasks executed by the slave processors, and it adds a thread pool dedicated to improving how the master processor handles the results returned by the slaves. ADABA's performance is validated through tournaments between two Checkers agents trained by Reinforcement Learning: the ADABA-based Checkers player named ADABA-Draughts, implemented in this paper, and the successful APHID-based agent called APHID-Draughts.
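The master-slave organization described above can be illustrated with a minimal, hypothetical Python sketch. It assumes a priority-queue ordering for the tasks handed to the slaves and a thread pool on the master for merging results that arrive asynchronously; all class and method names are invented here, and the paper's actual task-ordering policy is not reproduced.

```python
import heapq
import threading
from concurrent.futures import ThreadPoolExecutor

class Master:
    """Hypothetical master in an APHID/ADABA-style master-slave search.

    Illustrative only: the priority scheme and names are assumptions,
    not the ordering policy proposed by the paper.
    """

    def __init__(self, num_result_workers=4):
        self.tasks = []                 # min-heap of (priority, task_id, node)
        self.best = {}                  # task_id -> best score merged so far
        self._lock = threading.Lock()
        # Thread pool that absorbs results arriving asynchronously from
        # the slaves, so the master stays free to keep distributing work.
        self.pool = ThreadPoolExecutor(max_workers=num_result_workers)

    def submit_task(self, priority, task_id, node):
        # Lower priority value => handed to a slave earlier.
        heapq.heappush(self.tasks, (priority, task_id, node))

    def next_task(self):
        # Called whenever a slave processor becomes idle.
        return heapq.heappop(self.tasks) if self.tasks else None

    def on_slave_result(self, task_id, score):
        # Result handling is delegated to the pool (asynchronous merge).
        self.pool.submit(self._merge, task_id, score)

    def _merge(self, task_id, score):
        # Keep the best score seen for this task; locked because several
        # pool workers may merge results concurrently.
        with self._lock:
            prev = self.best.get(task_id, float("-inf"))
            self.best[task_id] = max(prev, score)
```

In this sketch the master never blocks on a slave: incoming results are queued onto the pool, which is the role the abstract attributes to ADABA's master-side thread pool.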


Keywords: Alpha-Beta search · Asynchronous distributed search algorithms · Automatic player agents · Checkers game



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Federal University of Uberlândia, Uberlândia, Brazil
  2. Federal Institute of Triângulo Mineiro, Uberaba, Brazil
