
Hold or roll: reaching the goal in jeopardy race games


We consider a class of dynamic tournaments in which two contestants are faced with a choice between two courses of action. The first is a riskless option (“hold”) of maintaining the resources the contestant already has accumulated in her turn and ceding the initiative to her rival. The second is the bolder option (“roll”) of taking the initiative of accumulating additional resources, and thereby moving ahead of her rival, while at the same time sustaining a risk of temporary setback. We study this tournament in the context of a jeopardy race game (JRG), extend the JRG to \(N > 2\) contestants, and construct its equilibrium solution. Compared to the equilibrium solution, the results of three experiments reveal a dysfunctional bias in favor of the riskless option. This bias is substantially mitigated when the contestants are required to commit in advance how long to pursue the risky course of action.




  1. Neller and Presser (2005) report that for \(d_{\max }\) = 26 the optimal policy does not change.


  1. Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.

  2. Bearden, J. N., Murphy, R. O., & Rapoport, A. (2005). A multi-attribute extension of the secretary problem: Theory and experiments. Journal of Mathematical Psychology, 49, 410–422.

  3. Bearden, J. N., & Rapoport, A. (2005). Operations research in experimental psychology. In J. C. Smith (Ed.), Tutorials in operations research: Emerging theory, methods, and applications (pp. 213–236). Hanover, MD: INFORMS.

  4. Bearden, J. N., Rapoport, A., & Murphy, R. O. (2006). Sequential observation and selection with rank-dependent payoffs: An experimental test of alternative decision rules. Management Science, 52, 1437–1449.

  5. Berry, D., & Fristedt, B. (1985). Bandit problems. London: Chapman and Hall.

  6. Biele, G., Erev, I., & Ert, E. (2009). Learning, risk attitude, and hot stoves in restless bandit problems. Journal of Mathematical Psychology, 53, 155–167.

  7. Busemeyer, J. R., & Pleskac, T. J. (2009). Theoretical tools for understanding and aiding dynamic decision making. Journal of Mathematical Psychology, 53, 126–138.

  8. Camerer, C. F. (1995). Individual decision making. In J. H. Kagel & A. E. Roth (Eds.), Handbook of experimental economics (pp. 587–703). Princeton, NJ: Princeton University Press.

  9. DeGroot, M. H. (1970). Optimal statistical decisions. New York: Wiley.

  10. Edwards, W. (1962). Dynamic decision theory and probabilistic information processing. Human Factors, 4, 59–73.

  11. Ferguson, T. S. (1989). Who solved the secretary problem? Statistical Science, 4, 282–296.

  12. Hey, J. D. (1981). Are optimal search rules reasonable? And vice versa? (And does it matter, anyway?). Journal of Economic Behavior and Organization, 2, 47–70.

  13. Hey, J. D. (1982). Search for rules of search. Journal of Economic Behavior and Organization, 3, 65–81.

  14. Hey, J. D., & Knoll, J. A. (2007). How far ahead do people plan? Economics Letters, 96, 8–13.

  15. Konrad, K. A. (2009). Strategy and dynamics in contests. New York: Oxford University Press.

  16. Lejuez, C. W., et al. (2002). Evaluation of a behavioral measure of risk taking: The Balloon Analogue Risk Task (BART). Journal of Experimental Psychology: Applied, 8, 75–84.

  17. Maner, J. K., Gailliot, M. T., Butz, D. A., & Peruche, B. M. (2007). Power, risk, and the status quo: Does power promote riskier or more conservative decision making? Personality and Social Psychology Bulletin, 33, 451–462.

  18. Neller, T. W., & Presser, C. G. M. (2004). Optimal play of the dice game Pig. The UMAP Journal, 25, 25–47.

  19. Neller, T. W., & Presser, C. G. M. (2005). Pigtail: A Pig Addendum. The UMAP Journal, 26, 443–458.

  20. Rabin, M., & Vayanos, D. (2010). The gambler’s and hot-hand fallacies: Theory and applications. The Review of Economic Studies, 77, 730–778.

  21. Rapoport, A. (1975). Research paradigms for studying dynamic decision behavior. In D. Wendt & C. Vlek (Eds.), Utility, probability, and human decision making. Dordrecht: Reidel.

  22. Rapoport, A., & Budescu, D. V. (1997). Randomization in individual choice behavior. Psychological Review, 104, 603–618.

  23. Rapoport, A., & Tversky, A. (1966). Cost and accessibility of offers as determinants of optimal stopping. Psychonomic Science, 4, 145–146.

  24. Rapoport, A., & Tversky, A. (1970). Choice behavior in an optimal stopping task. Organizational Behavior and Human Performance, 5, 105–120.

  25. Rapoport, A., & Wallsten, T. S. (1972). Individual decision behavior. Annual Review of Psychology, 23, 131–176.

  26. Scarne, J. (1945). Scarne on dice. Harrisburg, PA: Military Service Publishing Co.

  27. Seale, D. A., & Rapoport, A. (1997). Sequential decision making with relative ranks: An experimental investigation of the secretary problem. Organizational Behavior and Human Decision Processes, 69, 221–236.

  28. Seale, D. A., & Rapoport, A. (2000). Optimal stopping behavior with relative ranks: The secretary problem with unknown population size. Journal of Behavioral Decision Making, 13, 391–411.

  29. Stein, W. E., Seale, D. A., & Rapoport, A. (2003). Analysis of heuristic solutions to the best choice problem. European Journal of Operational Research, 151, 140–152.

  30. Tijms, H. (2012). Stochastic games and dynamic programming. Asian Pacific Mathematical Newsletter, 2, 6–10.

  31. Toda, M. (1962). The design of a fungus-eater: A model of human behavior in an unsophisticated environment. Behavioral Science, 7, 164–183.

  32. Wallsten, T. S., Pleskac, T. J., & Lejuez, C. W. (2005). Modeling behavior in a clinically diagnostic sequential risk-taking task. Psychological Review, 112, 862–880.



Acknowledgments

We gratefully acknowledge financial support from the National Science Foundation (NSF Collaborative Research Project SES-0089182/SES-0114138). We are also grateful to the Associate Editor and two reviewers for constructive suggestions about related literature and data analysis.

Author information

Correspondence to Darryl A. Seale.

Electronic supplementary material

Supplementary material 1 (docx 16 KB)


Appendix 1

Numerical solution of the equilibrium strategy for the 3-person JRG: the decision method

We look at each state from the viewpoint of each of the three players to obtain the values of \(P_\mathrm{A}, P_\mathrm{B}, P_\mathrm{C}\). Certain of these probabilities must be equal: at any stage, the only thing that distinguishes one player from another is whether she controls the die, is next in line to control it, or is second in line. Therefore, \((i,j,k,t,\mathrm{A})\), \((k,i,j,t,\mathrm{B})\), and \((j,k,i,t,\mathrm{C})\) are three states that can all be described in the same way: the player in control of the die has \(i\) points plus \(t\) points so far this turn, the next player has \(j\) points, and the player after that has \(k\) points. Therefore,

$$\begin{aligned} P_\mathrm{A} (i,j,k,t,\mathrm{A})&= P_\mathrm{B} (k,i,j,t,\mathrm{B})=P_\mathrm{C} (j,k,i,t,\mathrm{C}) \nonumber \\ P_\mathrm{B} (i,j,k,t,\mathrm{A})&= P_\mathrm{C} (k,i,j,t,\mathrm{B})=P_\mathrm{A} (j,k,i,t,\mathrm{C}) \\ P_\mathrm{C} (i,j,k,t,\mathrm{A})&= P_\mathrm{A} (k,i,j,t,\mathrm{B})=P_\mathrm{B} (j,k,i,t,\mathrm{C}). \nonumber \end{aligned}$$

Note that (12) is not of the same structure as (2), which relates the three probabilities for a specific state s. Because of (2), we never need to explicitly compute \(P_\mathrm{C}\) for any state. We can also avoid all computations involving C when he is the player in control of the die by using (12). That is, \(P_\mathrm{A} (j,k,i,t,\mathrm{C})\) and \(P_\mathrm{B} (j,k,i,t,\mathrm{C})\) can be replaced using

$$\begin{aligned} \begin{aligned} P_\mathrm{A} (j,k,i,t,\mathrm{C})&= P_\mathrm{B} (i,j,k,t,\mathrm{A}) \\ P_\mathrm{B} (j,k,i,t,\mathrm{C})&= P_\mathrm{A} (k,i,j,t,\mathrm{B}). \end{aligned} \end{aligned}$$

Therefore, we have reduced the problem to recursive computations for players A and B. In fact, from (7) we can also eliminate \(P_\mathrm{B} (-,-,-,-,\mathrm{B})\) and need storage only for \(P_\mathrm{A} (-,-,-,-,\mathrm{A})\), \(P_\mathrm{A} (-,-,-,-,\mathrm{B})\), and \(P_\mathrm{B} (-,-,-,-,\mathrm{A})\). This requires three matrices of size \(M^{4}\). (With \(M=60\), that is almost 13 million entries per matrix.)

We now specify the initial conditions and boundary conditions. Assume that all matrices are first set to 0. In the following, assume \(i,j,k\) are all less than \(M\) unless otherwise stated. The boundary conditions are

  • If \(i+t\ge M\) then \(P_\mathrm{A} (i,j,k,t,\mathrm{A})=1\).

  • If \(i\ge M\) then \(P_\mathrm{A} (i,j,k,t,\mathrm{B})=1\).

  • If \(j\ge M\) then \(P_\mathrm{B} (i,j,k,t,\mathrm{A})=1\).


Now, we start at state \(s=(i,j,k,t,\mathrm{A})\) and write out (3) and (4)

$$\begin{aligned} P_\mathrm{A} (s)&= \max \left\{ P_\mathrm{A} (i+t,j,k,0,\mathrm{B}),\left[ P_\mathrm{A} (i,j,k,0,\mathrm{B})\right. \right. \nonumber \\&\left. \left. +\sum _{m=2}^6 {P_\mathrm{A} (i,j,k,t+m,\mathrm{A})}\right] \Big /6\right\} . \end{aligned}$$

Note: (14) turns out to be the same equation as \(({4}^{\prime })\). This fills up the \(P_\mathrm{A} (-,-,-,-,\mathrm{A})\) matrix using the \(P_\mathrm{A} (-,-,-,-,\mathrm{A})\) and \(P_\mathrm{A} (-,-,-,-,\mathrm{B})\) matrices. With the same state \(s\), we now use (5)

$$\begin{aligned} P_\mathrm{B} (s)= \left\{ \begin{array}{l@{\quad }l} P_\mathrm{B} (i+t,j,k,0,\mathrm{B}), &{} \hbox {if } \mathrm{A} \hbox { holds at } s \\ {[}P_\mathrm{B} (i,j,k,0,\mathrm{B})+\sum \nolimits _{m=2}^6 P_\mathrm{B} (i,j,k,t+m,\mathrm{A}){]}/6,&{} \hbox {if } \mathrm{A} \hbox { rolls at } s. \\ \end{array} \right. \end{aligned}$$

Rewrite this using (7)

$$\begin{aligned} P_\mathrm{B} (s)=\left\{ \begin{array}{l@{\quad }l} P_\mathrm{A} (j,k,i+t,0,\mathrm{A}), &{} \hbox {if } \mathrm{A} \hbox { holds at } s \\ {[}P_\mathrm{A} (j,k,i,0,\mathrm{A})&{}\\ \quad +\sum \nolimits _{m=2}^6 {P_\mathrm{B} (i,j,k,t+m,\mathrm{A})} {]}/6, &{} \hbox {if } \mathrm{A} \hbox { rolls at } s. \\ \end{array} \right. \end{aligned}$$

This fills up the \(P_\mathrm{B} (-,-,-,-,\mathrm{A})\) matrix using \(P_\mathrm{A} (-,-,-,-,\mathrm{A})\) and \(P_\mathrm{B} (-,-,-,-,\mathrm{A})\). Now consider (6) for the same state \(s=(i,j,k,t,\mathrm{A})\)

$$\begin{aligned} P_\mathrm{C} (s)=\left\{ \begin{array}{l@{\quad }l} P_\mathrm{C} (i+t,j,k,0,\mathrm{B}), &{} \hbox {if } \mathrm{A} \hbox { holds at } s \\ {[}P_\mathrm{C} (i,j,k,0,\mathrm{B})+\sum \nolimits _{m=2}^6 {P_\mathrm{C} (i,j,k,t+m,\mathrm{A})} {]}/6, &{} \hbox {if } \mathrm{A} \hbox { rolls at } s. \\ \end{array} \right. \end{aligned}$$

Using (7), we may rewrite this as

$$\begin{aligned} P_\mathrm{A} (k,i,j,t,\mathrm{B})=\left\{ \begin{array}{l@{\quad }l} P_\mathrm{B} (j,k,i+t,0,\mathrm{A}), &{} \hbox {if }\mathrm{A} \hbox { holds at }s \\ {[}P_\mathrm{B} (j,k,i,0,\mathrm{A})&{}\\ \quad +\sum \nolimits _{m=2}^6 {P_\mathrm{A} (k,i,j,t+m,\mathrm{B})}{]}/6, &{} \hbox {if } \mathrm{A} \hbox { rolls at }s.\\ \end{array} \right. \end{aligned}$$

This fills up the \(P_\mathrm{A} (-,-,-,-,\mathrm{B})\) matrix using \(P_\mathrm{A} (-,-,-,-,\mathrm{B})\) and \(P_\mathrm{B} (-,-,-,-,\mathrm{A})\). Note that in (16) we have to check whether A holds or rolls at \(s=(i,j,k,t,\mathrm{A})\) and then update \(P_\mathrm{A} (k,i,j,t,\mathrm{B})\), not \(P_\mathrm{A} (i,j,k,t,\mathrm{B})\).

We now compute (14), (15), and (16). We need to set up space for the matrices \(P_\mathrm{A} (-,-,-,-,\mathrm{A}), P_\mathrm{A} (-,-,-,-,\mathrm{B}), P_\mathrm{B} (-,-,-,-,\mathrm{A})\), plus a matrix of the same size to store the current strategy of player A. We can use “asynchronous updating” in this dynamic programming problem: the same three matrices appear in (14), (15), and (16) whether they are on the right or left side of the equations. This speeds up convergence and eliminates the need to partition the states as described in Neller and Presser (2004, p. 31). The probabilities always increased over the iterations, so the terminating condition was chosen as making Eq. (2) for the starting state as close to 1 as desired; reaching 0.9999 required 25–40 iterations.

Convergence was fast: a few seconds for small values of \(M\), about 90 s for \(M=30\), and about an hour for \(M=60\). The algorithm was programmed in MATLAB.
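The computation described above is straightforward to prototype. The following Python sketch is our illustration, not the authors' MATLAB code: it runs asynchronous value iteration for a reduced goal \(M\), and for simplicity it skips the symmetry reduction, storing the win-probability vector of all three players at every state.

```python
import numpy as np

M = 8            # reduced goal score for illustration (the paper uses M = 60)
SWEEPS = 500     # safety cap; convergence is typically much faster
EYE = np.eye(3)

# V[i, j, k, t, p, q] = probability that player q wins from the state in which
# players 0, 1, 2 have banked i, j, k points, player p controls the die,
# and p has accumulated t points so far this turn.
V = np.zeros((M, M, M, M, 3, 3))

def update(i, j, k, t, p):
    """Asynchronously update state (i, j, k, t, p) in place."""
    scores = [i, j, k]
    nxt = (p + 1) % 3
    # hold: bank t points and pass the die
    if scores[p] + t >= M:
        hold = EYE[p]                        # p banks and wins outright
    else:
        s = scores.copy(); s[p] += t
        hold = V[s[0], s[1], s[2], 0, nxt]
    # roll: 1/6 roll a 1 (lose turn total, pass die), 1/6 each add m = 2..6
    roll = V[i, j, k, 0, nxt].copy()
    for m in range(2, 7):
        if scores[p] + t + m >= M:
            roll += EYE[p]                   # banking now wins, so p wins
        else:
            roll += V[i, j, k, t + m, p]
    roll /= 6.0
    # the player in control maximizes her own winning probability
    V[i, j, k, t, p] = hold if hold[p] >= roll[p] else roll

for _ in range(SWEEPS):
    for i in range(M):
        for j in range(M):
            for k in range(M):
                for t in range(M):
                    for p in range(3):
                        update(i, j, k, t, p)
    # terminate when the start state's probabilities are as close to 1 as desired
    if V[0, 0, 0, 0, 0].sum() > 0.9999:
        break

p_first, p_second, p_third = V[0, 0, 0, 0, 0]
```

As in the paper's computation, the probabilities increase monotonically from zero, so the sum of the start state's three win probabilities serves as the termination test; at the fixed point it equals 1, and the first mover's share exceeds the third mover's.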

Appendix 2

Numerical solution for the equilibrium strategy of the 2-player JRG: the strategy method

On each turn, assume the number of dice \(d\) that may be rolled is bounded by a finite maximum \(d_{\max }\).Footnote 1 Using notation similar to Neller and Presser (2005), let \(\pi (d, t)\) denote the probability that rolling \(d\) dice \((0 < d \le d_{\max })\) results in a turn score of \(t \ge 0\). Then,

$$\begin{aligned} \pi (d, t)=\left\{ \begin{array}{l@{\quad }l} 1/6, &{} d=1\;\mathrm{and}\;t\in \left\{ {0, 2,3,4,5,6} \right\} ;\\ 0, &{} d=1\;\mathrm{and}\;t\notin \left\{ {0,2,3,4,5,6} \right\} ;\\ \pi (d-1, 0)&{}\\ \quad +1/6 \sum \nolimits _{t=2}^{6(d-1)} \pi (d-1,t), &{} d>1\;\mathrm{and}\; t=0;\\ 1/6 \sum \nolimits _{r=2}^{\min (6,t-2)} \pi (d-1,t-r), &{} \mathrm{otherwise}. \\ \end{array} \right. \end{aligned}$$
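The recursion above transcribes directly into a few lines of Python (the function name `pi` is our choice), and its correctness is easy to check: for any \(d\) the probabilities sum to one, and \(\pi(d,0) = 1 - (5/6)^d\), the chance that at least one die shows a 1.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def pi(d, t):
    """Probability that rolling d dice (committed in advance) scores t this turn."""
    if d == 1:
        return 1 / 6 if t in (0, 2, 3, 4, 5, 6) else 0.0
    if t == 0:
        # either the first d-1 dice already busted, or they scored
        # some s >= 2 and the d-th die shows a 1
        return pi(d - 1, 0) + sum(pi(d - 1, s) for s in range(2, 6 * (d - 1) + 1)) / 6
    # the d-th die shows r in {2, ..., 6}, leaving t - r >= 2 for the first d-1
    return sum(pi(d - 1, t - r) for r in range(2, min(6, t - 2) + 1)) / 6
```

Memoization (`lru_cache`) keeps the recursion cheap even for the larger \(d\) values of the footnote.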

Under the strategy method, the player does not decide between roll and hold, but rather how many dice to roll during her turn. She chooses the number of dice \(d\) that maximizes her probability of winning. Let \(P_{i,j}\) denote the probability that the player wins at state \(i, j\). As in the standard JRG, if \(i \ge 100\), then \(P_{i,j} = 1\), as the player has sufficient points to win the game. In states where neither player has sufficient points to win \((0 \le i, j < 100)\), the probability that the player wins is given by

$$\begin{aligned} P_{i, j} = \mathop {\max }\limits _{0<d\le d_{\max }} \sum \limits _{t=0}^{6d} \pi (d,t)(1-P_{j,i+t}). \end{aligned}$$
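Because the turn score can be zero, the displayed equation is a fixed-point condition rather than a simple recursion, so it must be solved by value iteration. The following Python sketch does this for reduced, illustrative parameters of our choosing (`G = 20` and `DMAX = 8`, rather than the 100 and 26 used above):

```python
from functools import lru_cache
import numpy as np

G, DMAX = 20, 8   # reduced goal and dice cap, for illustration only

@lru_cache(maxsize=None)
def pi(d, t):
    """Probability that rolling d dice yields turn score t."""
    if d == 1:
        return 1 / 6 if t in (0, 2, 3, 4, 5, 6) else 0.0
    if t == 0:
        return pi(d - 1, 0) + sum(pi(d - 1, s) for s in range(2, 6 * (d - 1) + 1)) / 6
    return sum(pi(d - 1, t - r) for r in range(2, min(6, t - 2) + 1)) / 6

P = np.zeros((G, G))              # P[i, j]: win prob of the mover (score i) vs j
best_d = np.ones((G, G), int)     # equilibrium number of dice to commit to
for _ in range(500):
    delta = 0.0
    for i in range(G):
        for j in range(G):
            vals = []
            for d in range(1, DMAX + 1):
                v = 0.0
                for t in range(0, 6 * d + 1):
                    p = pi(d, t)
                    if p:
                        # reaching the goal wins; otherwise the opponent moves
                        v += p * (1.0 if i + t >= G else 1.0 - P[j, i + t])
                vals.append(v)
            new = max(vals)
            best_d[i, j] = 1 + vals.index(new)
            delta = max(delta, abs(new - P[i, j]))
            P[i, j] = new
    if delta < 1e-9:
        break
```

With these reduced parameters, `best_d` tabulates the equilibrium commitment at each state, and `P[0, 0] > 1/2` reflects the first-mover advantage.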

Neller and Presser point out that the shape (or surface) of the equilibrium solution for the strategy method (when depicted as a three-dimensional plot over states \(i,j\) and dice \(d\)) is similar to that of the standard JRG plotted over states \(i, j, t\). If one were to multiply the equilibrium number of dice to roll by 4 (the expected value per die of a successful roll), the product approximates the roll/hold threshold values of the standard JRG.
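The factor of 4 is simply the conditional expectation of a single die given that it does not end the turn:

```python
# given that a die avoids the losing face 1, it shows 2..6 with equal
# probability, so each committed die contributes 4 points in expectation
# on a successful turn
faces = [2, 3, 4, 5, 6]
expected_per_die = sum(faces) / len(faces)
print(expected_per_die)  # 4.0
```

Hence committing to \(d\) dice targets an expected successful turn score of \(4d\), which is why \(4d\) tracks the roll/hold thresholds of the standard game.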

Cite this article

Seale, D.A., Stein, W.E. & Rapoport, A. Hold or roll: reaching the goal in jeopardy race games. Theory Decis 76, 419–450 (2014).


  • Dynamic decision making
  • Jeopardy race game
  • Equilibrium solutions
  • Experiment