1 Introduction

The analysis of heterogeneity in decision making has a long tradition in experimental economics. Stahl and Wilson (1994, 1995) pioneered the use of finite-mixture models to study the decision strategies of participants in economic experiments. In an influential paper, Dal Bó and Fréchette (2011) estimated the frequencies of a set of candidate strategies to explain participants’ choices in a repeated prisoner’s dilemma experiment. In recent years, strategy frequency estimation has become increasingly popular in experimental economics and several model extensions have been proposed.

stratEst is a software package for the freely available statistical computing environment R (R Development Core Team, 2022) that significantly reduces the start-up costs of performing strategy frequency estimation. Programming strategy frequency estimation code from scratch usually requires considerable effort on the part of the analyst. Before model parameter optimization routines can be used, code must be written to compute the probability that a given sequence of decisions is generated by a particular strategy. Since each candidate strategy is different, this task must be performed for each strategy, which can be tedious, especially if the set of candidate strategies is large and the strategies are complex. An additional problem is the difficulty of adapting strategy estimation code to other data; the close correspondence between candidate strategies and data usually requires substantial revision of the code.

The stratEst package allows strategies to be generated, stored, and adjusted without the need for strategy-specific code to calculate the probability of certain decisions. Using the stratEst strategy generation function, the analyst can conveniently create customized strategies with little effort. A guiding principle of the package is that strategies are Markov strategies, represented as finite-state automata, and stored as dataframe-like objects that can be reloaded and adapted for later use. In the automaton representation, choice probabilities are determined by the internal state of the automaton, not by the history of the game. This guarantees that strategies are concisely represented even when the number of game histories is large or potentially infinite.

The simplicity of the automaton representation facilitates the programming and organization of strategies but also makes it easy to adapt existing strategies to other data. At the same time, it does not limit the complexity of strategies. Finite-state automata can mimic complex patterns of behavior based on deterministic sequences of state transitions triggered by inputs from the choice environment. It is important to note that the determinism of the automata concerns only the transitions between states, not the choices of the strategy. This means that it is possible to generate behavior strategies and define (or alternatively estimate) their state-specific choice probabilities.

Another potential obstacle for the analyst who wants to perform strategy estimation is transforming the data into a format suitable for analysis. The package includes a function that creates the inputs for the strategies to facilitate this transformation. This makes it easy to perform strategy estimation on a wide variety of data; new data can sometimes be analyzed with just a few lines of code. The package also includes a number of helpful functions for data processing and simulation, parameter testing, model checking, and model selection, further reducing the start-up costs of strategy estimation.

The estimation function of the package returns the maximum likelihood parameters of a strategy estimation model based on the expectation-maximization algorithm (Dempster et al., 1977) and the Newton–Raphson method. The package speeds up the estimation procedure by integrating C++ and R with the help of the R packages Rcpp (Eddelbuettel & François, 2011) and RcppArmadillo (Sanderson & Curtin, 2016), which provides access to the open-source C++ linear algebra library Armadillo. Package development is supported by the R packages devtools (Wickham et al., 2020), testthat (Wickham, 2011), roxygen2 (Wickham et al., 2020), and Sweave (Leisch, 2002). The strategies are plotted with the packages DiagrammeR (Iannone, 2020) and DiagrammeRsvg (Iannone, 2016).

The purpose of this paper is to introduce the scope and general principles of the stratEst package. The detailed package vignette is available on the author’s website. The package is available for download from the Comprehensive R Archive Network and is continuously tested for functionality on Windows, MacOS, and Linux. Non-commercial use of stratEst is free of charge. However, the author kindly asks all users of the package to cite this article in publications or presentations of their research.

2 A motivating example

This example illustrates how the package can be used to replicate the results of the influential strategy estimation study by Dal Bó and Fréchette (2011). The study examines the evolution of cooperation in the indefinitely repeated prisoner’s dilemma across six experimental treatments. The six treatments differ in the reward offered for mutual cooperation R and the continuation probability \(\delta \) of the repeated game. The stage game is shown in Fig. 1. The parameter R is either 32, 40, or 48. For each value of R, there are two treatments with continuation probabilities \(\delta \) of 1/2 or 3/4, resulting in a \(2 \times 3\) between-subjects design with six treatments.

Fig. 1 Stage game of Dal Bó and Fréchette (2011). The stage game features two choices, cooperation (c) and defection (d). R varies across experimental treatments and is either 32, 40, or 48

To follow along in R, every command shown after the command prompt R> can be executed in the R console. The complete code for generating all the output and figures presented below is also available as supplementary material. The following two commands install the latest CRAN version of the stratEst package and load it into memory:

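R> install.packages("stratEst")
R> library(stratEst)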

Strategies

Dal Bó and Fréchette (2011) fit the same strategy estimation model to the data of each treatment. This model features six strategies: Always Defect (ALLD), Always Cooperate (ALLC), Tit-For-Tat (TFT), Grim-Trigger (GRIM), Win-Stay-Lose-Shift (WSLS), and a trigger strategy with two punishment periods (T2). The package includes the predefined list strategies.DF2011, which contains the six strategies used by Dal Bó and Fréchette (2011). Each element of this list represents a strategy, encoded as a finite-state automaton.

The left panel of Fig. 2 shows the result of printing the element TFT of the list strategies.DF2011 to the R console. The strategy TFT is a finite-state automaton with two states, represented by the two rows of the printed object. In each state, the strategy defects or cooperates with the probabilities defined in the first two columns, prob.d and prob.c. Because these probabilities are either zero or one, each state also has an entry in the column tremble, which defines the probability that the strategy plays the action it does not predict in that state. For the strategy TFT (and all other strategies in the list strategies.DF2011), the tremble probabilities are not available (NA), which tells the estimation function that these probabilities should be estimated from the data.

Fig. 2 The strategy Tit-For-Tat. Left part shows the strategy TFT printed to the R console. Rows represent the two states of the automaton. Columns show the defection, cooperation and tremble probabilities in each state, as well as the deterministic state transitions between states. Right part shows the graphical representation of TFT. States are depicted as nodes, deterministic state transitions as arrows between nodes. Colors indicate the predicted action in each state

The remaining four columns define a matrix of deterministic state transitions triggered by four different strategy inputs in the two different states. Since the data come from a repeated prisoner’s dilemma, the four inputs cc, cd, dc, and dd reflect the four possible combinations of one’s own action and the action of the other player in the previous period.

All state transitions must be integers that indicate the state of the strategy after receiving the input. The interpretation of these transitions is as follows: the value 1 in row one and column tr(cc) means that whenever the strategy is in the first state (row one) and the input in the current period is cc, the strategy remains in the first state and cooperates unless a tremble occurs in that period. If the input is cd instead, the strategy transitions to the second state and cooperates only if a tremble occurs in that period. By definition, the first row is always the start state of the automaton in the first period; this convention can also be used to specify a particular behavior in the first period.

Understanding the behavior of complex strategies based on a matrix of deterministic state transitions can be difficult. A more convenient way to examine the behavior associated with the strategy TFT is to plot the strategy. The right panel of Fig. 2 shows the result of plotting the strategy with the function plot(). In the graphical representation, each state is represented by a node, and arrows indicate the deterministic state transitions triggered by the different inputs. Different colors indicate the predicted action in each state. The graphical representation of TFT makes it fairly easy to understand the behavior of the strategy.
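For example, the following two commands print and plot the strategy, accessing the list element by name:

R> strategies.DF2011$TFT
R> plot(strategies.DF2011$TFT)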

Data

To fit the strategies to the data, the stratEst package requires data in the long format with one row per decision. The object DF2011 is loaded with the package and contains the data used in Dal Bó and Fréchette (2011). The data set has three columns named id, game, and period, with integers that uniquely identify the subject and indicate the order of the games and the periods in each game. The data set also includes a column named choice, which contains the subject’s choice encoded as a factor with two levels (c and d), and a column named other.choice, which contains the choice of the subject’s partner in the same period. Readers can also use their own data in the format discussed and follow along from here.

What is missing in the data set DF2011 is a column with the strategy inputs received at the beginning of each period that trigger the deterministic state transitions. The inputs are the crucial piece of information that allows the package’s estimation function to determine the current state of each strategy for each observation in the data. The data function of the package, stratEst.data(), facilitates the generation of the input variable.

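A sketch of the call, with the options explained below:

R> data.DF2011 <- stratEst.data(data = DF2011, choice = "choice",
+    input = c("choice", "other.choice"), input.lag = 1)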

The options input = c("choice", "other.choice") and input.lag = 1 create the input variable by concatenating the players’ choices in the preceding period. The generated object data.DF2011 contains all the information necessary for fitting the strategies. The levels c and d of the variable choice correspond to the choice probabilities prob.c and prob.d in the first two columns of each strategy object. The levels of the variable input cc, cd, dc and dd correspond to columns tr(cc), tr(cd), tr(dc), and tr(dd), respectively. The input is not available (NA) in the first period because a lag of one period was used. Whenever the input is unavailable, the strategy will revert to the first state, which is, by definition, the start state of the automaton.

Model fitting

The command below replicates the strategy estimation results reported by Dal Bó and Fréchette (2011):

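R> # a sketch; stratEst.model() is the package's estimation function
R> model.DF2011 <- stratEst.model(data = data.DF2011,
+    strategies = strategies.DF2011, sample.id = "treatment")
R> summary(model.DF2011)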

Choosing the option sample.id = "treatment" estimates a model with treatment-specific parameters. This means that one vector of shares and one tremble parameter are estimated for each treatment in the data. The command summary(model.DF2011) prints a summary of the fitted model to the R console. The estimated shares reproduce the strategy shares reported in Table 7 on page 424 of Dal Bó and Fréchette (2011). The treatment-specific parameters of the fitted model can also be accessed separately. For example, the strategy shares for the data of the treatment with \(\delta = 0.5\) and \(R = 32\), rounded to two digits, are

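R> # element name for the delta = 0.5, R = 32 sample is an assumption;
R> # check names(model.DF2011$shares) for the actual labels
R> round(model.DF2011$shares[["0.5.32"]], 2)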

The fitted strategies can be plotted, printed to the console, or stored for later use. For example, the TFT strategy fitted to the data of the treatment with \(\delta = 0.5\) and \(R = 32\) looks like this:

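R> # element name below is hypothetical; check names(model.DF2011$strategies)
R> model.DF2011$strategies[["0.5.32.TFT"]]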

The maximum likelihood estimate of the treatment-specific tremble probability implies that the fitted TFT strategy randomly selects the action not predicted with a probability of 6%. Accounting for trembles, the effective cooperation probabilities in the two states are 0.94 and 0.06.

Parameter estimates and standard errors

The estimated parameters and standard errors of a fitted model are stored in objects with the extensions .par and .se. The estimated shares of the fitted model model.DF2011 can be inspected with the command print(model.DF2011$shares.par). Perhaps somewhat surprisingly, the object model.DF2011$shares.par does not indicate which parameter belongs to which strategy. The reason for this is that restricted model specifications can be estimated in which the shares of some strategies are determined by the same share parameter. Another option is to define strategy shares that are not estimated from the data. These possibilities preclude a one-to-one mapping of estimated share parameters and strategies.

The object shares.indices can be used to find the share parameter of a certain strategy. For example, the code below retrieves the estimated share of ALLD for the data of the first treatment. The same logic can be used to retrieve the estimated parameters and standard errors of trembles and response probabilities.

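R> # layout of shares.indices (strategies in rows, samples in columns) is assumed;
R> # print the object to verify
R> index.ALLD <- model.DF2011$shares.indices["ALLD", 1]
R> model.DF2011$shares.par[index.ALLD]   # estimated share of ALLD
R> model.DF2011$shares.se[index.ALLD]    # its standard error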

By default, the standard errors of the parameters and the quantiles of their sampling distribution are obtained from the empirical observed information matrix (Meilijson, 1989). Estimation based on the empirical observed information matrix creates little computational overhead. However, the method may produce downward-biased standard errors for parameters close to the boundary of the parameter space. For statistical testing, it is therefore recommended to estimate standard errors with a nonparametric block bootstrap. The block-bootstrap procedure takes into account the dependence of choices made by participants with the same id. The following code illustrates how block-bootstrapped standard errors are obtained. To keep the computation time short, only the data from the first treatment are used.

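R> # the argument names se and bs.samples are assumptions
R> treatments <- unique(data.DF2011$treatment)
R> data.t1 <- data.DF2011[data.DF2011$treatment == treatments[1], ]
R> model.boot <- stratEst.model(data = data.t1,
+    strategies = strategies.DF2011,
+    se = "bootstrap", bs.samples = 1000)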

Bootstrapping produces the same standard error of the share of ALLD in the first treatment. The estimated quantiles of the sampling distribution are

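R> # assuming the quantiles are stored in the element shares.quantiles
R> model.boot$shares.quantiles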

Adaptation

A key contribution of the package is that it allows users to perform different variants of strategy estimation with little effort. Some examples are given below. Perhaps most importantly, the package allows users to build and maintain an archive of customized strategies. For example, the following code generates the strategy known as Semi-Grim (Breitmoser, 2015; Backhaus & Breitmoser, 2018):

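A sketch of the generating call; the argument values follow the description below:

R> SGRIM <- stratEst.strategy(choices = c("d", "c"),
+    inputs = c("cc", "cd", "dc", "dd"),
+    prob.choices = c(0, 1, NA, NA, 1, 0),
+    tr.inputs = rep(c(1, 2, 2, 3), times = 3),
+    num.states = 3)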
Fig. 3 The strategy Semi-Grim. Left part shows the strategy SGRIM printed to the R console. Rows represent the three states of the automaton. Columns show the defection, cooperation and tremble probabilities in each state, as well as the deterministic state transitions between states. Right part shows the graphical representation of SGRIM. States are depicted as nodes, deterministic state transitions as arrows between nodes. Colors indicate the predicted action in each state

The argument choices of the strategy generation function specifies the choice alternatives of the strategy; it creates the columns prob.d and prob.c in the left panel of Fig. 3. The specified inputs create the columns with the state transitions tr(cc) to tr(dd). The values passed to prob.choices are filled row by row into the columns prob.d and prob.c. As a result, the strategy cooperates in the first state and defects in the third state. The NA values for the choice probabilities of the second state, which is reached after the inputs cd and dc, indicate that these parameters should be estimated from the data. The argument tr.inputs specifies the deterministic state transitions of SGRIM. The transitions must be integers between one and the total number of states, which is defined by the argument num.states. Since the integers are also filled into the columns row by row, the vector c(1, 2, 2, 3) can be replicated three times to generate transitions that do not depend on the current state. The right panel of Fig. 3 shows the plot of the strategy SGRIM.

Below are examples of various adaptations of the strategy estimation model available to analysts:

  • Adjust the set of candidate strategies, adding the behavior strategy SGRIM, and estimate its cooperation probability after histories cd and dc from the data.

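    R> # a sketch; append SGRIM to the predefined list and refit
    R> strategies.SGRIM <- c(strategies.DF2011, list(SGRIM = SGRIM))
    R> model.SGRIM <- stratEst.model(data = data.DF2011,
    +    strategies = strategies.SGRIM, sample.id = "treatment")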
  • Select a subset of the strategies that provide the best explanation of the data according to the Bayesian information criterion (Schwarz, 1978).

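    R> # a sketch; the argument names select and crit are assumptions
    R> model.select <- stratEst.model(data = data.DF2011,
    +    strategies = strategies.DF2011, sample.id = "treatment",
    +    select = "strategies", crit = "bic")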
  • Estimate the overall strategy shares by pooling the data of all treatments, keeping the tremble probabilities treatment specific.

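    R> # a sketch; the argument name sample.specific is an assumption
    R> model.pooled <- stratEst.model(data = data.DF2011,
    +    strategies = strategies.DF2011, sample.id = "treatment",
    +    sample.specific = "trembles")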
  • Fix selected model parameters, like the tremble probabilities of TFT, mixed cooperation probabilities of SGRIM, and the strategy shares.

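    R> # a sketch with purely illustrative values; fixed parameters replace
    R> # the NA entries, and the argument name shares is an assumption
    R> TFT.fixed <- strategies.DF2011$TFT
    R> TFT.fixed$tremble <- 0.05
    R> SGRIM.fixed <- SGRIM
    R> SGRIM.fixed[2, c("prob.d", "prob.c")] <- c(0.4, 0.6)
    R> model.fixed <- stratEst.model(data = data.DF2011,
    +    strategies = list(TFT = TFT.fixed, SGRIM = SGRIM.fixed),
    +    shares = c(0.7, 0.3))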
  • Transform the data. For example, imagine that the data set DF2011 contains second-mover decisions of a sequential game; second-mover strategies should react to the player’s own action in the previous period and the action of the first mover in the current period.

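    R> # a sketch; a vector-valued input.lag is an assumption: lag 1 for the
    R> # own choice, lag 0 for the current choice of the first mover
    R> data.seq <- stratEst.data(data = DF2011, choice = "choice",
    +    input = c("choice", "other.choice"), input.lag = c(1, 0))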

3 Workflow

The core of the stratEst package is a collection of functions for strategy generation, data processing and simulation, model fitting, parameter testing, and model checking. Figure 4 outlines the recommended workflow when using the package and highlights the functions involved in each step. A detailed description of each function can be found in the package’s R documentation or, alternatively, in the package vignette.

In the first step, the user collects or creates a set of candidate strategies based on prior knowledge or theoretical considerations. All strategies must assign probabilities to the action space observed in the data and respond to the same set of inputs. Thus, identifying a set of common inputs is often the initial task for the analyst and is ideally guided by theory.

As a second step, it is generally useful to simulate data for the set of candidate strategies. By fitting the correct model to simulated data, the analyst can verify that all model parameters are recovered. These checks are generally recommended because the parameters of the mixture model may not be identified. For example, it is not possible to recover the shares of a mixture of the grim trigger strategy and a strategy that always cooperates in the repeated prisoner’s dilemma if both strategies are error free. When working with more complex strategies, identification problems may arise that are much harder to anticipate but easy to detect using simulated data.
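For example, the following sketch simulates data from the six strategies with illustrative shares and refits the model to check whether the shares are recovered; the signature of stratEst.simulate() is an assumption:

R> data.sim <- stratEst.simulate(strategies = strategies.DF2011,
+    shares = c(0.3, 0.1, 0.3, 0.1, 0.1, 0.1))   # illustrative shares
R> model.sim <- stratEst.model(data = data.sim,
+    strategies = strategies.DF2011)
R> round(model.sim$shares.par, 2)   # compare with the simulated shares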

If all the model parameters can be recovered from the simulated data, the analyst can proceed to the third step of preparing the experimental data and fitting the model. After the estimation, the fitted model should be tested for misspecification.

Fig. 4 stratEst workflow

4 Limitations and future development

The current version of the package has several limitations. First, the action space of all strategies must be discrete. Choices are modeled as independent draws from a multinomial distribution defined by the strategy’s state-specific choice probabilities; that is, it is not possible to estimate strategies with continuous choices. Another limitation is that the state transitions must be deterministic and specified by the user. This excludes the possibility of fitting Markov strategies with probabilistic state transitions (Hidden Markov Models) or estimating state transitions from data.

The package will be further developed to address its limitations. One restriction that will be relaxed in future versions of the package is that all model strategies must respond to the same set of inputs. For example, for data from the prisoner’s dilemma, it is typically assumed that all candidate strategies are responses to the players’ actions in the previous period. Thus, all strategies are automata reacting to five inputs, the four possible combinations of actions in the last round and the empty history in the first period. In most cases, it will be possible to represent all model strategies as automata that satisfy this restriction. However, the representation of some strategies may be unnecessarily complex in this case, making strategy programming more difficult than it should be.

5 Conclusions

This article introduces the R software package stratEst for strategy frequency estimation. The stratEst package provides a free and easy-to-use framework for performing the modern strategy frequency estimation techniques used in experimental economics. The estimation function of the package fits a finite-mixture model of customized individual choice strategies and returns, among other values, the maximum likelihood estimates and standard errors of all model parameters. The package also includes several helpful functions that facilitate strategy programming, data processing and simulation, model selection, and model checking, as well as statistical tests of fitted model parameters.