Applications of Choquet expected utility to hypothesis testing with incompleteness

The Maximin and Choquet expected utility theories guide decision-making under ambiguity. We apply them to hypothesis testing in incomplete models. We consider a statistical risk function that uses a prior probability to incorporate parameter uncertainty and a belief function to reflect the decision-maker’s willingness to be robust against the model’s incompleteness. We develop a numerical method to implement a test that minimizes the risk function. We also use a sequence of such tests to approximate a minimax optimal test when a nuisance parameter is present under the null hypothesis.


Introduction
Economic models may make incomplete predictions when the researcher works with weak assumptions. For example, in a discrete game, the researcher may not know the precise form of the information the players have access to, even if the theory fully describes the primitives of the game. Or the theory may be silent about how an equilibrium gets selected when multiple equilibria exist. In such settings, the model's prediction is essentially the set of outcome values the incomplete theory allows. This paper considers models with set-valued predictions of the following form:

Y ∈ G(U|X; θ),   (1)
where Y is an outcome variable and G is a weakly measurable correspondence that maps the observable and unobservable characteristics (X, U) and a parameter θ to a set of permissible outcome values. This structure is common across different areas of application. Examples include discrete choices with an unknown choice set, dynamic discrete choice models, matching, network formation, and voting models (see Molinari, 2020, and references therein). This paper considers a statistical decision problem in such an environment.
The researcher wishes to choose an optimal action based on data and their incomplete theory. A noteworthy feature is that the researcher faces three types of uncertainty: sampling, parameter, and "selection" uncertainty. The first two are standard in statistical decision theory (Wald, 1939, 1945). As such, we treat them using Wald's framework. The selection uncertainty is specific to incomplete models. The theory does not provide further information on how Y is selected from G. A possible approach is to parameterize the selection and treat it as a subcomponent of the parameter vector. Existing papers adopt parametric and nonparametric versions of this "completion" approach. While the approach makes the problem tractable, it is not always straightforward to parameterize the selection because it often represents complex objects such as an information structure in games (Magnolfi & Roncoroni, 2022), an initial condition in dynamic choices (Honoré & Tamer, 2006), unobserved choice set formation processes (Barseghyan et al., 2021), and so on. This paper considers a complementary robust approach. We construct robust statistical decision rules by borrowing insights from the Maximin and Choquet expected utility theories and their applications to incomplete models. In particular, the belief function expected utility allows us to encode the researcher's lack of understanding of the selection. This approach is in line with the econometrics literature (Manski, 2003; Tamer, 2010; Molinari, 2020) that seeks partial identification and inference methods that do not require further assumptions on the incomplete parts of the model.
We define a statistical risk that incorporates the three types of uncertainty. We further demonstrate that the key components of the statistical risk can be represented by Choquet integrals of tests with respect to a belief function and its conjugate. Belief functions are Choquet capacities (or non-additive measures) that can capture a decision-maker's perception of uncertainty when she cannot assign a unique probability but can assign an interval-valued assessment. They were introduced to the statistics literature by Dempster (1967) and Shafer (1982) as functions that summarize the "weight of evidence" for events. The belief function is also called a containment functional in the econometrics literature, as it represents the law of the random set G(U|X; θ) by the probability of G being contained in events of interest. It is a crucial tool for encoding the empirical content of the model into sharp identifying restrictions on the underlying parameter (Galichon & Henry, 2006, 2011; Beresteanu et al., 2011; Chesher & Rosen, 2017).

The Japanese Economic Review (2023) 74:551-572

This paper builds on Kaido and Zhang (2019), who develop a hypothesis-testing framework for incomplete models. Using their framework, we define a statistical risk function that evaluates the performance of a test through the Choquet integral of a loss function. The criterion evaluates a test's performance based on its size and power guarantee. The latter is the value of power that is certain to realize regardless of the unknown selection mechanism. We then define an analog of the Bayes risk and call it a Bayes-Dempster-Shafer (BDS) risk because it averages a loss function over unknown parameters using a single prior distribution and integrates over unknown selection mechanisms using a belief function.
Building on Huber and Strassen (1973), Kaido and Zhang (2019) showed that the optimal statistical decision rule, the BDS test, is a likelihood-ratio test based on a least-favorable pair of distributions. This paper develops a numerical method to implement the BDS test. The proposed algorithm is also useful for constructing a sequence of tests that approximate a global minimax test. In an important earlier work, Chamberlain (2000) studied applications of the maximin utility to econometric problems and developed numerical methods. Following his work, we develop an algorithm to construct a sequence of BDS tests that approximate the minimax test. Our formulation combines well-understood computational problems: convex (or concave) programming and one-dimensional root-finding. Efficient solvers are available for these problems.
As argued earlier, the empirical content of an incomplete model can be summarized by the sharp identifying restrictions characterizing the core of a belief function. A well-developed literature on inference based on such restrictions exists. Asymptotically valid tests and confidence regions have been studied extensively (see Canay & Shaikh, 2017, and references therein). In particular, this paper's test, like recent developments on sub-vector inference, tackles testing restrictions on subcomponents of the parameter vector (Bugni et al., 2017; Kaido et al., 2019). The existing inference methods construct test statistics based on sample moment inequalities. They compute asymptotically valid critical values by bootstrapping the test statistics combined with regularization methods to ensure their asymptotic validity (see, e.g., Andrews & Soares, 2010; Andrews & Barwick, 2012). This approach can be computationally costly, as it typically involves solving optimization problems over bootstrapped samples. In contrast, our procedure's critical value has a closed form. However, it requires us to solve optimization problems to construct a test statistic. We combine well-understood computational problems to mitigate this cost. Bayesian and quasi-Bayesian approaches are also considered in the recent work of Chen et al. (2018) and Giacomini and Kitagawa (2021). Belief functions are also used extensively in decision theory (see Mukerji, 1997; Ghirardato, 2001; Epstein et al., 2007; Gul & Pesendorfer, 2014). In a general setting where the sample space describing repeated experiments defines payoff-relevant states, Epstein and Seo (2015) gave behavioral foundations for the type of preference (or risk) considered in this paper. Finally, we note that one may use this paper's framework regardless of whether the underlying parameter is point identified or partially identified. The essential requirement is that the model has the structure in (1). Other statistical decision frameworks in the presence of partial identification are studied in Hirano and Porter (2012), Song (2014), and Christensen et al. (2022).
The rest of the paper is organized as follows. Section 2 sets up an incomplete model and discusses a motivating example. Section 3 provides the theoretical background behind hypothesis testing and reviews results in Kaido and Zhang (2019). Section 4 presents numerical algorithms to implement the BDS test and to approximate a minimax test.

Set-up
Let Y ∈ 𝒴 ⊆ ℝ^{d_Y} and X ∈ 𝒳 ⊆ ℝ^{d_X} denote, respectively, the observable outcome and covariates, and let U ∈ 𝒰 ⊆ ℝ^{d_U} represent latent variables. We let S ≡ 𝒴 × 𝒳 and assume S is a finite set. This assumption still accommodates a wide range of discrete choice models. Relaxing the finiteness of 𝒳 is possible, but we work in a simple setting to highlight key conceptual issues. For any metric space A, we let Δ(A) denote the set of all Borel probability measures on (A, Σ_A), where Σ_A is the Borel σ-algebra on A. Let Θ ⊆ ℝ^d be a finite-dimensional parameter space. We first illustrate an incomplete structure with an example.
Example 1 (Discrete games of strategic substitutes) Consider a two-player game. Each player may either choose y^(k) = 0 or y^(k) = 1. The payoff of player k is

π^(k)(y^(k), y^(−k), x^(k), u^(k)) = y^(k)(x^(k)′δ^(k) + β^(k) y^(−k) + u^(k)),

where y^(−k) ∈ {0, 1} denotes the other player's action, x^(k) is player k's observable characteristics, and u^(k) is an unobservable payoff shifter. The payoff structure is assumed to be common knowledge among the players.
The strategic interaction effect β^(k) captures the impact of the opponent's choice y^(−k) = 1 on player k's payoff. Suppose that β^(k) ≤ 0 for both players and that the players play a pure strategy Nash equilibrium (PSNE). Then one can summarize the set of PSNEs by a correspondence u ↦ G(u|x; θ) (Tamer, 2003), where θ = (δ′, β′)′. Depending on the region in which (u^(1), u^(2)) falls, G(u|x; θ) is one of the singletons {(0, 0)}, {(0, 1)}, {(1, 0)}, {(1, 1)}, or the two-element set {(0, 1), (1, 0)}; the last case is the region of multiple equilibria, where the model does not predict which of the two outcomes is selected.
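To make the equilibrium correspondence concrete, the sketch below enumerates the PSNEs of the 2×2 game by brute force. It is our illustration, not code from the paper: the covariate index x^(k)′δ^(k) is absorbed into u^(k), and the function name `psne_set` is ours.

```python
from itertools import product

def psne_set(u1, u2, b1, b2):
    """Pure strategy Nash equilibria of the 2x2 entry game.

    Player k earns y_k * (u_k + b_k * y_other); with b_k <= 0 the game
    exhibits strategic substitutes. Returns the set G(u; theta) of
    equilibrium outcome pairs (y1, y2)."""
    def best_response(u, b, y_other):
        # Entering (y=1) pays u + b*y_other; staying out pays 0.
        payoff = u + b * y_other
        return {1} if payoff > 0 else ({0} if payoff < 0 else {0, 1})

    return {(y1, y2) for y1, y2 in product((0, 1), repeat=2)
            if y1 in best_response(u1, b1, y2)
            and y2 in best_response(u2, b2, y1)}
```

For shocks in the intermediate region, e.g. `psne_set(0.5, 0.5, -1.0, -1.0)`, the set contains both (0, 1) and (1, 0): this is the multiplicity region where the incomplete theory is silent about selection.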
Recall that, conditional on X = x, the outcome takes values in G(U|x; θ). We summarize the model's prediction by the following correspondence from 𝒳 × 𝒰 × Θ to S:

Γ(x, u; θ) ≡ G(u|x; θ) × {x}.

The map Γ collects the values of the observable variables compatible with (x, u, θ). Let P be the joint distribution of (Y, X), P_{Y|X} the conditional distribution of Y given X, and P_X the marginal distribution of X. Suppose X's marginal distribution is known to be F_X, and assume U|X ∼ F(⋅|⋅; θ) for a distribution belonging to a parametric family.
Let S = (Y, X) be the vector of observable variables, which takes values in S. For each θ ∈ Θ, let P_θ be the set of distributions of S that are compatible with the model assumption, where the conditional law of Y given (X, u), written μ_{Y|X,u}, is supported on G(u|x; θ) and represents the unknown selection mechanism. The set P_θ collects the probability distributions P of (Y, X) that are compatible with θ for some selection mechanism.
A statistical experiment can be summarized as follows. Nature draws θ according to a prior distribution π ∈ Δ(Θ). Given θ, (U, X) are generated according to F(⋅|x; θ) and F_X. An outcome Y is selected from G(U|X; θ), but the decision-maker (DM) does not understand how the outcome is selected. Equivalently, a distribution P ∈ P_θ of the realized outcome Y gets selected, but the DM does not understand how this is done. Nonetheless, the DM must make a decision based on the sample. Hence, the DM faces three types of uncertainty. Two of them, sampling and parameter uncertainty, are standard in Wald's statistical decision theory. However, there is an added layer to the decision problem: the DM's theory is incomplete and does not give any guidance on the selection mechanism. Hence, we assume the DM seeks a robust statistical decision rule that attains a risk guarantee regardless of the selection mechanism. As in other decision problems, robust decision rules are appealing when the DM wants to avoid further assumptions on fine details of the environment (Gilboa & Schmeidler, 1989; Bergemann & Morris, 2005; Carroll, 2019).
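The experiment above can be mimicked in a few lines. The sketch below is our illustration, not the paper's code: it fixes θ = (β^(1), β^(2)) with covariates suppressed, draws latent shocks, and lets an arbitrary selection rule, never observed by the DM, pick an outcome from the equilibrium set whenever it is not a singleton.

```python
import random

def G(u1, u2, b1, b2):
    """PSNE outcome set of the 2x2 entry game (strategic substitutes, b <= 0)."""
    def br(u, b, y_other):
        return 1 if u + b * y_other >= 0 else 0
    return {(y1, y2) for y1 in (0, 1) for y2 in (0, 1)
            if y1 == br(u1, b1, y2) and y2 == br(u2, b2, y1)}

def simulate(b1, b2, select, n, seed=0):
    """Draw n outcomes: nature draws shocks, then the *unknown* rule
    `select` picks one equilibrium from G whenever there are several."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u1, u2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        data.append(select(sorted(G(u1, u2, b1, b2)), rng))
    return data

# Two selection rules the incomplete theory cannot tell apart:
first_eq = lambda eqs, rng: eqs[0]             # deterministic selection
coin_flip = lambda eqs, rng: rng.choice(eqs)   # randomized selection
```

Running `simulate` with `first_eq` versus `coin_flip` generates two data sets from the same θ; any decision rule that is sensitive to which one was used is relying on an assumption the theory does not supply.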

Belief function
We briefly review Choquet capacities and introduce belief functions. We refer to Gilboa (2009) for a review of their use in decision theory and Denneberg (1994) for technical treatments. Our treatment here is based on Gilboa and Schmeidler (1994).

A set function ν : 2^S → ℝ with ν(∅) = 0 is called a capacity if it is monotone, i.e., A ⊆ B implies ν(A) ≤ ν(B); it is normalized if ν(S) = 1. A capacity ν is additive if ν(A ∪ B) = ν(A) + ν(B) for all disjoint A, B ⊆ S, and convex (or supermodular) if ν(A ∪ B) + ν(A ∩ B) ≥ ν(A) + ν(B) for all A, B ⊆ S. A capacity ν is totally monotone if it is nonnegative and, for any n ≥ 2 and events A_1, …, A_n,

ν(⋃_{i=1}^n A_i) ≥ ∑_{∅≠I⊆{1,…,n}} (−1)^{|I|+1} ν(⋂_{i∈I} A_i).

A normalized totally monotone capacity is called a belief function.
Remark 1 The definitions above are for settings with finite S. One can define capacities on more general spaces that accommodate continuous covariates by adding regularity conditions. We refer to Huber and Strassen (1973, Section 2), who define capacities on complete separable metrizable spaces. One can use the general version to incorporate continuous covariates. The convex capacities on S still retain the key properties (in particular, Eqs. (11, 12)) introduced below.
Define the law of G(U|x; θ) by

ν(A|x; θ) ≡ F({u : G(u|x; θ) ⊆ A} | x; θ),   A ∈ C,

where C is the collection of closed subsets of 𝒴. The map ν(⋅|x; θ) is non-additive in general and hence is not a measure. One can show it is a normalized totally monotone capacity and hence a belief function (Philippe et al., 1999). Also define the conjugate

ν*(A|x; θ) ≡ 1 − ν(A^c|x; θ),

which is the probability that G(U|x; θ) intersects A. In what follows, for any capacity ν, we let ν* denote its conjugate. We also define the unconditional versions of ν and ν*: for each A ⊆ 𝒴 × 𝒳, let

ν(A; θ) ≡ ∫ ν({y : (y, x) ∈ A} | x; θ) dF_X(x).

Note that, for any product set A = A_Y × A_X, this reduces to ∫_{A_X} ν(A_Y|x; θ) dF_X(x). The core of a capacity ν is the set of distributions that are weakly bounded from below by ν:

core(ν) ≡ {P ∈ Δ(S) : P(A) ≥ ν(A) for all A}.

As shown in the context of cooperative games, every convex capacity has a nonempty core (Shapley, 1965). Artstein's theorem (Artstein, 1983, Theorem 2.1) allows us to characterize all distributions of measurable selections S = (Y, X) ∈ Γ(X, U; θ) by the core of the belief function:

P_θ = core(ν(⋅; θ)).

This property is crucial for econometric applications because it allows us to characterize identifying restrictions by inequalities that do not involve selection mechanisms. We also exploit this property in our numerical algorithm.
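On a finite outcome space, the containment functional ν and its conjugate can be approximated by simulating the random set, and the Artstein inequalities can be checked by enumerating events. The sketch below is our illustration (covariates ignored, the entry game standing in for G); the function names are ours.

```python
import random
from itertools import combinations

S = [(0, 0), (0, 1), (1, 0), (1, 1)]

def G(u1, u2, b1, b2):
    """PSNE outcome set of the 2x2 entry game (strategic substitutes, b <= 0)."""
    def br(u, b, y_other):
        return 1 if u + b * y_other >= 0 else 0
    return frozenset((y1, y2) for y1 in (0, 1) for y2 in (0, 1)
                     if y1 == br(u1, b1, y2) and y2 == br(u2, b2, y1))

def subsets(space):
    for r in range(len(space) + 1):
        yield from (frozenset(c) for c in combinations(space, r))

def simulate_capacities(b1, b2, n=5000, seed=0):
    """Monte Carlo approximation of nu(A) = P(G subset of A) (containment)
    and its conjugate nu*(A) = P(G hits A) = 1 - nu(complement of A)."""
    rng = random.Random(seed)
    sets = [G(rng.gauss(0, 1), rng.gauss(0, 1), b1, b2) for _ in range(n)]
    nu = {A: sum(g <= A for g in sets) / n for A in subsets(S)}
    nu_star = {A: sum(bool(g & A) for g in sets) / n for A in subsets(S)}
    return nu, nu_star, sets
```

Whatever selection one applies to the simulated equilibrium sets, the resulting empirical law P satisfies P(A) ≥ ν(A) for every event A, in line with Artstein's characterization of the core.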
For any capacity ν and real-valued function f on S, the Choquet integral of f with respect to ν is defined by

∫ f dν ≡ ∫_0^∞ ν({s : f(s) ≥ t}) dt + ∫_{−∞}^0 [ν({s : f(s) ≥ t}) − 1] dt,   (10)

where the integrals on the right-hand side of (10) are Riemann integrals. In decision theory, Choquet integrals are used to represent preferences that exhibit ambiguity aversion (Schmeidler, 1989). An important property of the belief function is that, for any real-valued function f,

min_{P ∈ core(ν)} ∫ f dP = ∫ f dν   and   max_{P ∈ core(ν)} ∫ f dP = ∫ f dν*.

Hence, the infimum and supremum of the expected values of f over P_θ coincide with the Choquet integrals with respect to ν and its conjugate. These properties play an important role in defining the statistical risk function in the next section.
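On a finite space the Choquet integral in (10) reduces to a finite sum over a decreasing rearrangement of f. A minimal sketch (ours, with toy capacities):

```python
from itertools import combinations

def subsets(space):
    for r in range(len(space) + 1):
        yield from (frozenset(c) for c in combinations(space, r))

def choquet(f, nu, space):
    """Choquet integral of f w.r.t. capacity nu on a finite space:
    sum_i f(s_(i)) * [nu(A_i) - nu(A_{i-1})], where f(s_(1)) >= ... >= f(s_(n))
    and A_i collects the i largest points of f."""
    order = sorted(space, key=lambda s: -f[s])
    total, prev, top = 0.0, 0.0, set()
    for s in order:
        top.add(s)
        cur = nu[frozenset(top)]
        total += f[s] * (cur - prev)
        prev = cur
    return total

space = ['a', 'b', 'c']
p = {'a': 0.2, 'b': 0.3, 'c': 0.5}
additive = {A: sum(p[s] for s in A) for A in subsets(space)}
# Unanimity game on {a,b}: the simplest non-additive belief function.
belief = {A: float({'a', 'b'} <= A) for A in subsets(space)}
f = {'a': 1.0, 'b': 2.0, 'c': 4.0}
```

For the additive capacity the Choquet integral is the ordinary expectation (2.8 here); for the unanimity belief function it is min(f(a), f(b)) = 1, which equals min over the core of E_P[f], since that core consists of all distributions supported on {a, b}.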
Testing composite hypotheses

Theoretical background
This section reviews Kaido and Zhang's (2019) hypothesis-testing framework. Many hypotheses of practical interest are composite: consider, for example, testing hypotheses on subcomponents of θ. Let θ = (β′, δ′)′ ∈ Θ_β × Θ_δ, where β is a k × 1 sub-vector of interest and δ is a (d − k) × 1 vector of nuisance parameters. Then the two parameter sets are Θ_0 = {β_0} × Θ_δ and Θ_1 = {β : β ≠ β_0} × Θ_δ. In the discrete game example, the researcher may want to test whether the players interact strategically, which can be formulated as the problem of testing H_0 : β = 0. Similar problems arise in other incomplete models. The researcher's action is binary, a = 1 (reject) or a = 0 (accept). Define a loss function L : Θ × {0, 1} → ℝ_+ by

L(θ, a) = a·1{θ ∈ Θ_0} + κ(1 − a)·1{θ ∈ Θ_1},

where κ > 0. The loss from a Type-I error is normalized to 1. The parameter κ determines the trade-off between the Type-I and Type-II errors.
A statistical decision rule (or a test) is a function φ from S to a probability distribution over {0, 1}; we identify φ(s) with the rejection probability at s. We let Φ collect all (randomized) tests. For each test φ ∈ Φ and θ ∈ Θ, define the upper risk

R̄(θ, φ) ≡ sup_{P ∈ P_θ} E_P[L(θ, φ(S))].

Recall that P_θ collects all data-generating processes that are compatible with θ for some selection mechanism. The upper risk evaluates the risk under the scenario that nature generates data using a selection mechanism that is least favorable to the DM. We use the structure of the incomplete model to reformulate the upper risk. First, for any θ ∈ Θ_0, let the size of φ be

R_0(θ, φ) ≡ sup_{P ∈ P_θ} E_P[φ(S)].

Next, for any θ ∈ Θ_1, let

R_1(θ, φ) ≡ inf_{P ∈ P_θ} E_P[φ(S)]

be the power guarantee of φ. This is a robust measure of power at θ: the value R_1(θ, φ) is guaranteed to realize regardless of the selection mechanism. The following proposition shows that the upper risk can be expressed using Choquet integrals of φ.

Proposition 1 The size and power guarantee satisfy

R_0(θ, φ) = ∫ φ dν*(⋅; θ) for θ ∈ Θ_0,   R_1(θ, φ) = ∫ φ dν(⋅; θ) for θ ∈ Θ_1.

The upper risk can be expressed as

R̄(θ, φ) = 1{θ ∈ Θ_0} ∫ φ dν*(⋅; θ) + κ·1{θ ∈ Θ_1}(1 − ∫ φ dν(⋅; θ)).   (20)

Equation (20) shows the upper risk trades off the size R_0(θ, φ) and the power guarantee R_1(θ, φ). We represent each object by a Choquet integral of φ. This formulation leads to the Bayes-Dempster-Shafer risk below.
It remains to incorporate parameter uncertainty. For this, let π be a prior distribution over Θ. We write π = γπ_0 + (1 − γ)π_1, where γ ∈ (0, 1) and π_0, π_1 are probability measures supported on Θ_0 and Θ_1, respectively. One can interpret this as nature drawing a parameter value from Θ_0 according to π_0 with probability γ and from Θ_1 according to π_1 with probability 1 − γ. We now define our main performance criterion for evaluating tests.
Definition 1 (Bayes-Dempster-Shafer (BDS) risk) For each prior π ∈ Δ(Θ) and test φ : S → Δ({0, 1}), let

r(π, φ) ≡ ∫_Θ R̄(θ, φ) dπ(θ).   (21)

The BDS risk uses the prior probability to reflect parameter uncertainty, while it uses the belief function (and its conjugate) to incorporate the decision-maker's willingness to be robust against incompleteness. It can also be written as

r(π, φ) = γ ∫ φ dν̄*_0 + (1 − γ)κ(1 − ∫ φ dν̄_1),   (22)

where the integrals in (22) are Choquet integrals with respect to ν̄*_0 and ν̄_1, with ν̄_i(A) ≡ ∫_{Θ_i} ν(A; θ) dπ_i(θ) for i = 0, 1. We use the capacity ν̄*_0 to evaluate the average size of φ across Θ_0. Similarly, we use ν̄_1, an average belief function over Θ_1, to evaluate the power of φ. We call φ a BDS test if it minimizes the BDS risk.
A key component of the BDS risk in (22) is the term ∫ φ dν̄_1, which we call the weighted average power guarantee. Its interpretation is similar to that of the weighted average power (Andrews & Ploberger, 1994, 1995). One may direct a test's power guarantee toward certain alternatives by choosing a suitable π_1. As in Bayesian hypothesis-testing problems, the BDS risk trades off the risks from the Type-I and Type-II errors. A difference is that these are the worst-case (in terms of selection) size and power guarantee, averaged over the parameter space. The following lemma characterizes the BDS test. It uses the fact that the risk can be written as in (22), together with a result from Huber and Strassen (1973).
Lemma 1 (Lemma 5.1 in Kaido and Zhang (2019)) Let the BDS risk be defined as in (21). Suppose core(ν̄_0) ∩ core(ν̄_1) = ∅. Then there exists a BDS test φ* such that, for any κ > 0,

φ*(s) = 1 if Λ(s) > C,   φ*(s) = 0 if Λ(s) < C,   (23)

where C = γ∕(κ(1 − γ)), and Λ is a version of the Radon-Nikodym derivative dQ_1∕dQ_0 of the least-favorable pair (LFP) (Q_0, Q_1) ∈ core(ν̄_0) × core(ν̄_1) such that, for all t ∈ ℝ_+,

Q_0({Λ > t}) = max_{Q ∈ core(ν̄_0)} Q({Λ > t})   and   Q_1({Λ > t}) = min_{Q ∈ core(ν̄_1)} Q({Λ > t}).

The lemma states that a BDS test takes the form of a Neyman-Pearson test, where the likelihood ratio Λ is formed by a representative pair (Q_0, Q_1) taken from the two sets of distributions. The distribution Q_0 is compatible with the null hypothesis and maximizes the rejection probability of the test, whereas Q_1 is compatible with the alternative hypothesis and minimizes the rejection probability of the test. The lemma requires that the cores of ν̄_0 and ν̄_1 not overlap. To ensure this condition, it is sufficient to have at least one Ā ⊂ S such that P_0(Ā) < P_1(Ā) (or P_0(Ā) > P_1(Ā)) for all (P_0, P_1) ∈ P_{θ_0} × P_{θ_1} and (θ_0, θ_1) ∈ Θ_0 × Θ_1. In the entry game example, one may take Ā = {(1, 1)} × {x} for any x ∈ 𝒳. An important computational implication of Lemma 1 is that we may focus on the LFP to implement the BDS test. Furthermore, one can find the LFP by solving a finite-dimensional convex program whose constraints are the Artstein inequalities defining core(ν̄_0) × core(ν̄_1) (Kaido & Zhang, 2019). We use this result to develop numerical algorithms in Sect. 4.
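Once a candidate pair (q_0, q_1) is in hand (the actual LFP requires solving the convex program referenced above), the test in (23) is immediate. A sketch of ours with made-up densities on the four entry-game outcomes:

```python
def lr_test(q0, q1, C):
    """phi(s) = 1{Lambda(s) > C} with Lambda = q1/q0. The tie Lambda = C,
    where randomization may be needed, is left at 0 in this sketch."""
    phi = {}
    for s in q0:
        lam = q1[s] / q0[s] if q0[s] > 0 else float('inf')
        phi[s] = 1.0 if lam > C else 0.0
    return phi

def rejection_prob(phi, q):
    """E_q[phi(S)]: size under a null density, power under an alternative."""
    return sum(phi[s] * q[s] for s in q)

# Hypothetical (not the paper's) LFP densities:
q0 = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
q1 = {(0, 0): 0.25, (0, 1): 0.40, (1, 0): 0.30, (1, 1): 0.05}
```

With C = 1 the test rejects exactly where q_1 exceeds q_0, namely at (0, 1) and (1, 0); its size under q_0 is 0.5 and its power under q_1 is 0.7.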

Minimax test
Suppose the DM wants to maximize the weighted average power guarantee only among tests that keep their size below a prespecified level. The BDS tests are not designed to achieve uniform size control over Θ_0. Hence, we need a more general treatment.
Toward this end, the following minimax theorem is useful. It characterizes the minimax test as a BDS test for the least-favorable prior (if it exists). For this, let M(π_1) ≡ {π : π = γπ_0 + (1 − γ)π_1, π_0 ∈ Δ(Θ_0), γ ∈ (0, 1)}, where π_1 is fixed. In what follows, we drop π_1 from the argument of M, but the dependence should be understood. Also, we let g ∨ h denote the pointwise maximum of g and h.
The objective function on the right-hand side of (26) is the maximum risk sup_{π∈M} r(π, φ). Hence, φ† is a minimax test. Suppose κ is chosen so that the minimax value is less than or equal to a prespecified level α ∈ (0, 1). Then φ† is a level-α test.
The theorem suggests that the minimax test can be approximated (in terms of risk) by a sequence of BDS tests {φ_n, n = 1, 2, …}, where each BDS test φ_n minimizes the average risk r(π_n, φ) with respect to a prior π_n. For complete models with nuisance parameters, numerical methods for approximating minimax tests have been developed (Chamberlain, 2000; Moreira & Moreira, 2013; Elliott et al., 2015). We aim to achieve this goal in incomplete models.

Numerical implementation
We develop a numerical algorithm to construct an approximating sequence to the minimax test. Our strategy is as follows. First, we approximate prior distributions over Θ_0 by finite mixtures. Second, for a given κ, we develop a concave program that achieves the maximin risk over the mixture space. Third, we carry out a one-dimensional root-finding to find a value of κ that ensures uniform size control. Building on Chamberlain (2000), we consider a finite mixture for π_0. Let J ∈ ℕ and let Δ_J be the unit simplex in ℝ^J. Define

M_{0,J} ≡ {π_0 : π_0 = ∑_{j=1}^J λ_j ψ_j, λ ∈ Δ_J},

where λ = (λ_1, ⋯, λ_J)′ is a weight vector and {ψ_j(⋅)} is a set of (basis) densities over Θ_0. Let π_{0,J} ∈ M_{0,J} be a distribution over Θ_0. Lemma 1 ensures the existence of a likelihood-ratio test φ* that minimizes the BDS risk. Therefore, the value of the inner optimization problem on the right-hand side of (28) can be written as a function of (γ, v_1, …, v_{J−1}), where we let v_j = γλ_j. This map is concave, as shown below.
Hence, one can find a solution to (28) by maximizing this value subject to the constraints γ ∈ (0, 1), v_j ≥ 0, and ∑_{j=1}^{J−1} v_j ≤ γ. One may obtain finer approximations to the left-hand side of (26) as J increases.
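The averaged capacity ν̄_0 over a finite mixture is again a belief function, since a mixture of totally monotone capacities is totally monotone. With point-mass basis densities on a grid of null parameter values, the averaging is just a weighted sum. The sketch below is our illustration, with unanimity games standing in for the ν(⋅; θ_j):

```python
from itertools import combinations

def subsets(space):
    for r in range(len(space) + 1):
        yield from (frozenset(c) for c in combinations(space, r))

def unanimity(T, space):
    """Belief function nu_T(A) = 1{T subset of A}, a simple totally
    monotone capacity."""
    return {A: float(frozenset(T) <= A) for A in subsets(space)}

def mixture(capacities, lam):
    """nu_bar(A) = sum_j lam_j * nu_j(A) for weights lam on the simplex."""
    return {A: sum(l * nu[A] for l, nu in zip(lam, capacities))
            for A in capacities[0]}

def is_convex(nu):
    """Check 2-monotonicity: nu(A|B) + nu(A&B) >= nu(A) + nu(B)."""
    keys = list(nu)
    return all(nu[A | B] + nu[A & B] >= nu[A] + nu[B] - 1e-12
               for A in keys for B in keys)
```

For instance, mixing the unanimity games on {a} and on {a, b} with weights (0.3, 0.7) yields a normalized capacity that still passes the convexity check.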
Proposition 2 Let (Q_0, Q_1) be the LFP and φ* be the associated BDS test. Then the inner optimal value in (28) is attained by φ*, and (γ, v_1, …, v_{J−1}) ↦ r(π, φ*) is a concave function.
Step 3: Maximize the inner optimal value subject to the constraints γ ∈ (0, 1), v_j ≥ 0, and ∑_{j=1}^{J−1} v_j ≤ γ, which is a concave program by Proposition 2. One can use the sequential quadratic programming algorithms available in common programming languages.
Remark 2 Algorithm 1 computes the objective function using the size and power of the BDS test under the LFP distributions Q_0 and Q_1, respectively. The algorithm therefore bypasses explicit evaluations of the Choquet integrals.

Algorithm 2 (Inner optimization, Inputs
Step 1: For each A ⊆ S, compute the capacities ν̄_0(A) and ν̄_1(A).

Step 2: Solve the convex optimization problem from Lemma 1 to obtain the LFP densities (q_0, q_1), and compute the BDS test φ* as in (23) with Λ = q_1∕q_0.

After running Algorithm 1, one obtains (γ*, v*_1, …, v*_{J−1}) and the associated maximum risk in (28). We write this value as a function r*(κ) of the input κ. If one wants the test φ* to achieve approximate size control at level α, one can run a one-dimensional root-finding with respect to κ to solve r*(κ) − α = 0. A numerical experiment suggests r*(κ) − α may have multiple roots depending on the numerical tolerances used in the optimization problems in Algorithms 1-2. In such cases, we recommend plotting r*(κ) and checking whether the solution found by the root-finding algorithm is reasonable.
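The root-finding step needs nothing beyond a bracketing method. Below is a bisection sketch of ours, with a toy stand-in for the map κ ↦ r*(κ); the real map comes from Algorithms 1-2 and, as noted, may have multiple roots, so the bracket should be inspected by plotting.

```python
def find_kappa(r_star, alpha, lo, hi, tol=1e-8):
    """Bisection for r_star(kappa) = alpha. Assumes r_star is continuous
    and the bracket [lo, hi] satisfies
    sign(r_star(lo) - alpha) != sign(r_star(hi) - alpha)."""
    f_lo = r_star(lo) - alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = r_star(mid) - alpha
        if f_lo * f_mid <= 0:
            hi = mid            # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid  # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# Toy decreasing stand-in for the maximum-risk map, with root at 0.065:
toy = lambda k: 0.05 * (0.065 / k)
```

With level α = 0.05 and bracket [0.01, 1], the bisection recovers the root κ = 0.065 of the toy map to the requested tolerance.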
Remark 3 When F_X does not depend on θ, the likelihood-ratio statistic is of the form Λ(y, x) = q_1(y|x)∕q_0(y|x) (see Section 5 in Kaido and Zhang (2019)), where q_0(y|x) and q_1(y|x) can be computed by solving, for each x, the conditional version of the problem with ν̄_0(A|x) = ∑_{j=1}^J λ_j ∫_{Θ_0} ν(A|x; θ) ψ_j(θ) dθ and ν̄_1(A|x) = ∫_{Θ_1} ν(A|x; θ) dπ_1(θ). When 𝒳 is finite, one can either solve (32) once or solve (33) for each x in the support, and they yield the same BDS test. When X is continuous, one can solve (33) for each observed value of X to calculate the likelihood-ratio statistic. Hence, although finding the LFP for each x is cheap, the total computational cost can be high when the sample size is large and X is continuous.
The results are as follows. Figure 1 shows the approximated least-favorable prior densities for different values of J. The densities tend to place relatively high mass near 0 but also allocate positive mass across strictly negative values of β^(2). Note that the algorithm aims at approximating the optimal value rather than the optimal solution. Hence, the approximated least-favorable prior density may change with J, as the figure shows.
The average computation time for each case is shown in Panel A of Table 1. All of our numerical experiments were run on an Intel(R) Core(TM) i9-9980HK CPU at 2.40 GHz. We repeated the same computation 100 times and computed the average time to complete Algorithm 1. Panel B in Table 1 reports the LFP densities q_0, q_1 and the likelihood ratios Λ across J; they are calculated by Step 2 in Algorithm 2. Although the convex optimization requires the belief functions as lower-bound restrictions for all possible events A ⊂ S, a computational trick is to use a subset of constraints that still characterizes the sharp identifying restrictions. In the two-player entry game, there are 2^4 = 16 possible events, but we only need to compute the belief functions and their conjugates for the events {(0, 0), (1, 1), (1, 0)}. These restrictions are core determining in the sense of Galichon and Henry (2011). Since the parameter space is (−∞, 0], we assign a large negative value as a lower bound and grid the space with a finite number of points for approximation purposes; currently, the lower bound is −5 and the total number of grid points is 110. For Step 2, we use a convex program solver, CVX: http://cvxr.com/cvx/doc/solver.html. We set κ = 0.065, under which the size of the BDS tests is below 5% across J.

Table 1 Summary of the numerical illustration: critical values, maximum BDS risk, size, and weighted average power (WAP) of the BDS test. Panel A reports average computation times.
Table 1 shows that the LFP-based likelihood ratio Λ(y) = q_1(y)∕q_0(y) is high for y = (0, 1), (1, 0) and exceeds the critical value. In contrast, Λ(y) is much lower than the critical value when y = (1, 1), and Λ(y) equals 1 when y = (0, 0). This result can be explained as follows. First, for a given β^(2), moving the value of β^(1) from its null value (i.e., 0) to a strictly negative value raises the probability assigned to y = (0, 1) and decreases the probability assigned to y = (1, 1). Figure 2 shows the level sets of u ↦ G(u|θ); the red region shows {u : G(u|θ) = {(0, 1)}}. The probability assigned to y = (1, 0) may stay the same or may fall depending on the equilibrium selection, and the probability of y = (0, 0) remains constant regardless. The precise LFP depends on how π assigns weights across parameter values in Θ_0 and Θ_1. The LFP-based likelihood ratio suggests that, under π, one should expect (on average) an increase in the probabilities of y = (0, 1), (1, 0) and a decrease in the probability of y = (1, 1). The BDS test uses this information to make a rejection decision.

Concluding remarks
The paper applies the belief function expected utility to a hypothesis-testing problem with incompleteness. We propose a numerical method to approximate a minimax test for incomplete models. The procedure does not require an estimator of the nuisance parameter, but it requires optimization of prior distributions over the null parameter space. As such, it extends the type of algorithm considered by Chamberlain (2000) to incomplete models. The proposed method is designed for finite-sample problems. A direction for further research is to study ways to deal with large samples. One possibility is to incorporate large-sample approximations of experiments.

Proof of Proposition 2
Here the penultimate equality follows from (Q_0, Q_1) being the LFP, which is ensured by Lemma 1. The first claim of the proposition then follows from (38)-(40).
For the second claim, observe that, for any test φ, the BDS risk r(π, φ) is affine in (γ, v_1, …, v_{J−1}). The inner optimal value is the pointwise minimum (with respect to φ) of these affine functions. Hence, it is concave. ◻