Abstract
After introducing the jeopardy card game Fowl Play, we present equations for optimal two-player play, describe their solution with a variant of value iteration, and visualize the optimal play policy. Next, we discuss the approximation of optimal play and note that neural network learning can achieve a win rate within 1% of optimal play, yet with a 5-orders-of-magnitude reduction in memory requirements. Optimal komi (i.e., compensation points) are computed for the two-player games of Pig and Fowl Play. Finally, we make use of such komi computations in order to redesign Fowl Play for two-player fairness, creating the game Red Light.
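The value-iteration approach mentioned above can be illustrated on the structurally similar jeopardy dice game Pig (first to 100 points wins; a roll of 1 loses the turn total), which the same authors analyzed in earlier work. The sketch below is ours, not the paper's code: it iterates the win-probability equations over states (own score, opponent score, turn total) with an assumed reduced goal of 20 so that it converges quickly. Holding with an empty turn total is disallowed, mirroring the initial-draw requirement discussed in note 1 below.

```python
# Illustrative value-iteration sketch for the dice game Pig (not the paper's
# exact code). P[(i, j, k)] = probability that the player to move wins with
# own score i, opponent score j, and turn total k.
GOAL = 20   # reduced goal for a quick demo; the real game plays to 100
EPS = 1e-9  # convergence threshold on the largest value update

# Initialize all reachable non-terminal states to 0.5.
P = {(i, j, k): 0.5
     for i in range(GOAL)
     for j in range(GOAL)
     for k in range(GOAL - i)}

def backup(i, j, k):
    # Value of rolling: a 1 (prob. 1/6) forfeits the turn total and passes
    # the turn; rolls 2..6 either win immediately or grow the turn total.
    roll = (1.0 - P[(j, i, 0)]) / 6.0
    for r in range(2, 7):
        if i + k + r >= GOAL:
            roll += 1.0 / 6.0          # banking this roll reaches the goal
        else:
            roll += P[(i, j, k + r)] / 6.0
    # Value of holding: bank the turn total and pass the turn. Holding with
    # an empty turn total is disallowed (cf. the initial-draw requirement).
    hold = 0.0 if k == 0 else 1.0 - P[(j, i + k, 0)]
    return max(roll, hold)

# Sweep all states until the largest update falls below EPS.
while True:
    delta = 0.0
    for state in P:
        v = backup(*state)
        delta = max(delta, abs(v - P[state]))
        P[state] = v
    if delta < EPS:
        break
```

After convergence, `P[(0, 0, 0)]` gives the first player's win probability under optimal play, and the gap above 0.5 is exactly the first-player advantage that a komi computation would aim to cancel.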
Notes
1. Although a turn's initial draw requirement is not stated explicitly, the rules imply it, and it is necessary to avoid stalemate. The rules state "You can stop counting and collect your points at any time as long as you don't turn over a wolf!", implying that one has already started counting chickens/points. Consider a scenario where both players are tied at 49 points and the deck contains a single wolf card: it is in neither player's interest to draw the wolf, so rational players would hold forever as a first action if permitted.
2. Komi is a Japanese Go term, short for "komidashi".
3. For players with red/green color-blindness, we recommend yellow or light-green chips for sufficient contrast.
4. That is, fair after first-player determination.
© 2014 Springer International Publishing Switzerland
Cite this paper
Neller, T.W., Malec, M., Presser, C.G.M., Jacobs, F. (2014). Optimal, Approximately Optimal, and Fair Play of the Fowl Play Card Game. In: van den Herik, H., Iida, H., Plaat, A. (eds) Computers and Games. CG 2013. Lecture Notes in Computer Science(), vol 8427. Springer, Cham. https://doi.org/10.1007/978-3-319-09165-5_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-09164-8
Online ISBN: 978-3-319-09165-5