The recent spate of theoretical models of behaviour under ambiguity can be partitioned into two sets: those involving multiple priors and those not. This paper provides an experimental investigation of the first set. Using an appropriate experimental interface, we examine the fitted and predictive power of the various theories. We first estimate subject by subject, and then estimate and predict using a mixture model over the contending theories. The individual estimates suggest that 24% of our 149 subjects behave consistently with Expected Utility, 56% with the Smooth Model, 11% with Rank Dependent Expected Utility and 9% with the Alpha Model. These figures are close to the mixing proportions obtained from the mixture estimates, where the posterior probabilities of subjects being of the various types are 25%, 50%, 20% and 5% respectively, and, using the predictions, 22%, 53%, 22% and 3%. The Smooth Model appears the best.
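The posterior type probabilities reported above arise from Bayes' rule applied to the mixture model: each subject's posterior probability of being a given type combines the estimated mixing proportion with that subject's likelihood under the type. A minimal sketch of this step, with made-up mixing proportions and log-likelihoods purely for illustration:

```python
import math

# Illustrative sketch: posterior P(type k | data) is proportional to
# pi_k * L_k, where pi_k is the mixing proportion and L_k the subject's
# likelihood under type k. Computed in log space for numerical stability.
# All numbers below are hypothetical, not the paper's estimates.

def posterior_type_probs(mix_props, log_liks):
    """Posterior type probabilities via Bayes' rule (log-sum-exp trick)."""
    logs = [math.log(p) + ll for p, ll in zip(mix_props, log_liks)]
    m = max(logs)  # subtract the max before exponentiating
    exps = [math.exp(x - m) for x in logs]
    total = sum(exps)
    return [e / total for e in exps]

# Four types (EU, Smooth, RDEU, Alpha) with illustrative values for
# one subject's log-likelihood under each type:
mix = [0.25, 0.50, 0.20, 0.05]
ll = [-40.0, -35.0, -38.0, -41.0]
post = posterior_type_probs(mix, ll)
print([round(p, 3) for p in post])
```

For this hypothetical subject the Smooth type receives almost all of the posterior mass, because its log-likelihood dominates even after weighting by the mixing proportions.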
We should note that we share the doubts of those who wonder whether this is the appropriate characterisation of a situation of ambiguity. Such doubts include the fact that these models complicate an already complicated decision problem: for example, if someone does not know the probabilities, how can he or she attach probabilities to the various possibilities? However, at the end of the day, this is an empirical issue.
We note that this model does not use the values of the probabilities of the various possibilities, though it does use the set of possible probabilities.
In the general form of the theory, the decision-makers themselves are supposed to specify the set of possible probabilities and the probabilities attached to them; in our experiment these were objectively specified and we assume that the subjective set and the subjective probabilities are the same as the objective ones. It could be argued that this is not the specification envisaged by the authors of the Smooth Model, but that contains our specification as a special case. We also note that it is difficult to test the general version as one needs to be able to elicit not only the set of possible probabilities but also the subjective probabilities attached to them by the decision-maker.
Note that, with just two outcomes, the notion of a reference point is irrelevant.
Here the usual convention that f(x) = 0 if x < 0 never applies.
We also tried the power form f(p) = p^g but this did not appear to represent an improvement.
See Andersen et al. (2006).
The number of questions in each task depended upon the initial number of one-stage lotteries in the changing task: to be precise if there were N such one-stage lotteries then there would be N decision-problems in that task. N varied across tasks.
For further details on the mixture approach in the context of choice under risk, see Conte et al. (2011).
In this case the parameter g has to be bounded below to make the function f(.) monotonically increasing over [0,1].
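The need for a lower bound on g can be checked numerically. The footnote does not state the functional form of f; as an assumption we use the Tversky and Kahneman (1992) weighting function, which is known to lose monotonicity on [0,1] when its parameter falls below roughly 0.279:

```python
# Illustrative numerical check of the monotonicity constraint. The
# functional form below is an assumption (the Tversky-Kahneman 1992
# weighting function), used only to show why a lower bound on the
# parameter is needed.

def w(p, gamma):
    """Tversky-Kahneman probability weighting function (assumed form)."""
    num = p ** gamma
    den = (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return num / den

def is_increasing(gamma, steps=1000):
    """Check monotonicity of w(., gamma) on a grid over [0, 1]."""
    grid = [i / steps for i in range(steps + 1)]
    vals = [w(p, gamma) for p in grid]
    return all(b >= a for a, b in zip(vals, vals[1:]))

print(is_increasing(0.61))  # a typical empirical estimate: monotonic
print(is_increasing(0.2))   # below the bound: not monotonic
```

With the parameter at 0.2 the function dips and rises again, so an unconstrained estimate could produce a weighting function that is not a valid transformation of probabilities; bounding the parameter below rules this out.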
In this case the parameter a has to be between 0 and 1.
In a perfect world, where there are no other types other than the four included in our mixture and where each subject sticks to his or her type from the first to the last task, we would not need to introduce any penalisation. However, in an imperfect world, richer-in-parameters models are more able to attract “outliers”. For this reason, we decided to penalise richer models in favour of the EU model that has no parameters except for that of the additive error term. Our approach is inspired by Preminger and Wettstein (2005).
Obviously, we do not maximise the resulting likelihood at this stage, because we use parameter estimates.
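The penalisation idea can be sketched as follows. The paper's exact penalty (following Preminger and Wettstein 2005) is not reproduced here; as an assumption we deduct a fixed amount per free parameter beyond the additive error term common to all models, in the spirit of information criteria:

```python
# Hypothetical sketch of penalising richer-in-parameters models when
# comparing fitted log-likelihoods. The penalty scheme and all numbers
# are illustrative assumptions, not the paper's specification.

def penalized_ll(log_lik, n_extra_params, penalty_per_param=1.0):
    """Log-likelihood minus a penalty per extra free parameter."""
    return log_lik - penalty_per_param * n_extra_params

# Illustrative comparison across the four model types for one subject:
# (log-likelihood, number of parameters beyond the error term)
models = {
    "EU":     (-40.0, 0),  # no parameters beyond the additive error term
    "Smooth": (-36.0, 2),
    "RDEU":   (-37.5, 1),
    "Alpha":  (-38.0, 1),
}
scores = {name: penalized_ll(ll, k) for name, (ll, k) in models.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

Without the penalty, a richer model can win simply by absorbing noise; with it, a richer model must improve the fit by more than the cost of its extra parameters to displace EU.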
This is all the more true for the Maxmin model proposed in Gilboa and Schmeidler (1989), of which the Alpha Model is a generalisation.
Abdellaoui, M., Baillon, A., Placido, L., & Wakker, P. (2011). The rich domain of uncertainty: source functions and their experimental implementation. American Economic Review, 101, 695–723.
Ahn, D.S., Choi, S., Gale, D., Kariv, S. (2010). Estimating ambiguity aversion in a portfolio choice experiment. Working Paper.
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2006). Elicitation using multiple price list formats. Experimental Economics, 9, 383–405.
Andersen, S., Fountain, J., Harrison, G.W., Rutström, E.E. (2009). Estimating aversion to uncertainty. Working Paper.
Baillon, A. (2008). Eliciting subjective probabilities through exchangeable events: an advantage and a limitation. Decision Analysis, 5, 76–87.
Camerer, C. (1995). Individual decision making. In J. Kagel, A. Roth (Eds.), Handbook of experimental economics (pp. 587–703). Princeton University Press.
Camerer, C., & Weber, M. (1992). Recent developments in modeling preferences: uncertainty and ambiguity. Journal of Risk and Uncertainty, 5(4), 325–370.
Conte, A., Hey, J. D., & Moffatt, P. G. (2011). Mixture models of choice under risk. Journal of Econometrics, 162(1), 79–88.
Gajdos, T., Hayashi, T., Tallon, J. M., & Vergnaud, J. C. (2008). Attitude toward imprecise information. Journal of Economic Theory, 140, 27–65.
Ghirardato, P., Maccheroni, F., & Marinacci, M. (2004). Differentiating ambiguity and ambiguity attitude. Journal of Economic Theory, 118, 133–173.
Gilboa, I., & Schmeidler, D. (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18, 141–153.
Greiner, B. (2004). The online recruitment system ORSEE 2.0—A guide for the organization of experiments in economics. University of Cologne Discussion Paper (www.orsee.org).
Halevy, Y. (2007). Ellsberg revisited: an experimental study. Econometrica, 75(2), 503–536.
Hey, J.D., & Pace, M. (2011). The explanatory and predictive power of non two-stage-probability theories of decision making under ambiguity. University of York Department of Economics and Related Studies Discussion Paper 11/22.
Hey, J. D., Lotito, G., & Maffioletti, A. (2010). The descriptive and predictive adequacy of theories of decision making under uncertainty/ambiguity. Journal of Risk and Uncertainty, 41(2), 81–111.
Kahneman, D., & Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica, 47, 263–291.
Klibanoff, P., Marinacci, M., & Mukerji, S. (2005). A smooth model of decision making under ambiguity. Econometrica, 73, 1849–1892.
Moffatt, P. G., & Peters, S. A. (2001). Testing for the presence of a tremble in economic experiments. Experimental Economics, 4, 221–228.
Preminger, A., & Wettstein, D. (2005). Using the penalized likelihood method for model selection with nuisance parameters present only under the alternative: an application to switching regression models. Journal of Time Series Analysis, 26(5), 715–741.
Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior and Organization, 3, 323–343.
Schmeidler, D. (1989). Subjective probability and expected utility without additivity. Econometrica, 57, 571–587.
Segal, U. (1987). The Ellsberg Paradox and risk aversion: an anticipated utility approach. International Economic Review, 28, 175–202.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Wilcox, N.T. (2007). Predicting risky choices out of context: A Monte Carlo study. University of Houston Working Paper.
The authors would like to thank an anonymous referee for very helpful and sympathetic comments which led to significant improvements in the paper.
Conte, A., Hey, J.D. Assessing multiple prior models of behaviour under ambiguity. J Risk Uncertain 46, 113–132 (2013). https://doi.org/10.1007/s11166-013-9164-x
- Alpha model
- Expected utility
- Mixture models
- Rank dependent expected utility
- Smooth model