
A Definition Approach for an “Emotional Turing Test”

  • Dirk M. Reichardt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4738)

Abstract

There are many modelling approaches for emotional agents. Can they be compared in any way? The intention of this work is to provide a basis for comparison in a small but consistent environment which focuses on the impact of emotions on the decision making of agents. We chose the public goods game with a punishment option as the scenario. Why? Because in this scenario humans have been shown to react emotionally and non-rationally. An emotional agent should therefore be able to show the same emotions, and the underlying model should be capable of explaining them. The simulation and test environment is designed to accommodate any emotional agent model. Ultimately, human players should not be distinguishable from artificial emotional agents.
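
To make the scenario concrete, the following Python sketch shows one round of a public goods game with a punishment stage and a plug-in player interface. It is illustrative only and not taken from the paper: the Agent interface, the play_round function, and the parameter values (an endowment of 20 units, a return of 0.4 per contributed unit, and a 1:3 cost-to-impact punishment ratio, roughly in the spirit of Fehr and Gächter's experiments) are assumptions made here for illustration.

    from abc import ABC, abstractmethod
    from typing import List

    class Agent(ABC):
        """Minimal interface an emotional agent model (or a human player) must implement."""

        @abstractmethod
        def contribute(self, endowment: int) -> int:
            """Return the amount (0..endowment) invested into the public project."""

        @abstractmethod
        def punish(self, own_index: int, contributions: List[int]) -> List[int]:
            """Return punishment points assigned to each player after seeing all contributions."""

    def play_round(agents: List[Agent],
                   endowment: int = 20,
                   return_per_unit: float = 0.4,
                   punishment_cost: int = 1,
                   punishment_impact: int = 3) -> List[float]:
        """Play one public goods round with a punishment stage and return the payoffs."""
        n = len(agents)

        # Stage 1: simultaneous contributions to the public project.
        contributions = [min(max(a.contribute(endowment), 0), endowment) for a in agents]
        project_return = return_per_unit * sum(contributions)
        payoffs = [endowment - c + project_return for c in contributions]

        # Stage 2: punishment. Each assigned point costs the punisher
        # `punishment_cost` and reduces the target's payoff by `punishment_impact`.
        for i, agent in enumerate(agents):
            points = agent.punish(i, contributions)
            for j, p in enumerate(points):
                if j == i or p <= 0:
                    continue
                payoffs[i] -= punishment_cost * p
                payoffs[j] -= punishment_impact * p
        return payoffs

A concrete emotional agent model, or a human player connected through the same interface, would subclass Agent; this plug-in property is what allows the environment to host any emotional agent model and to compare its behaviour with that of humans.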

Keywords

Test Environment · Public Goods Game · Public Project · Turing Test · Altruistic Punishment


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Dirk M. Reichardt
  1. BA Stuttgart - University of Cooperative Education, D-70180 Stuttgart, Germany
