Can ranking techniques elicit robust values?

Abstract

This paper reports two experiments which examine the use of ranking methods to elicit ‘certainty equivalent’ values. It investigates whether such methods are able to eliminate the disparities between choice and value which constitute the ‘preference reversal phenomenon’ and which thereby pose serious problems for both theory and policy application. The results show that ranking methods are vulnerable to distorting effects of their own, but that when such effects are controlled for, the preference reversal phenomenon, previously so strong and striking, is very considerably attenuated.

Notes

  1. In essence, the R-F effect states that the value assigned to an item—in the example from Robinson et al. (2001), that is the score on the scale from 0 to 100—reflects not only the ‘intrinsic’ value of that item but also its ranking in the set of items within which it is embedded.
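
A minimal sketch of the kind of adjustment the R-F effect describes, in the spirit of Parducci's range-frequency formulation; the equal weighting, the 0-100 scaling and the function name are illustrative assumptions, not values taken from Robinson et al. (2001):

```python
def range_frequency_score(value, context, w=0.5):
    """Illustrative range-frequency score of `value` within `context`
    (the set of items it is embedded in), on a 0-100 scale.
    The judged score blends the item's position in the range of the set
    with its rank within the set; the 50/50 weight `w` is an assumption."""
    lo, hi = min(context), max(context)
    range_part = (value - lo) / (hi - lo) if hi > lo else 0.5   # position within the range
    rank = sorted(context).index(value)                          # 0 = lowest item in the set
    freq_part = rank / (len(context) - 1) if len(context) > 1 else 0.5
    return 100 * (w * range_part + (1 - w) * freq_part)

# The same item receives different scores in two different embedding sets:
print(range_frequency_score(40, [10, 20, 30, 40, 100]))  # ranked near the top of its set
print(range_frequency_score(40, [40, 60, 70, 80, 100]))  # ranked at the bottom of its set
```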

  2. The experiments in question also addressed certain other anomalies, such as the ‘common ratio effect.’ However, because the total volume of evidence was too great to fit into a single paper, companion papers focus on the other anomalies and compare the patterns generated purely within pairwise choice tasks with those inferred from ranking. A more detailed account of those results can be found at http://www.uea.ac.uk/eco/people/add_files/loomes/

  3. Based on the mechanism originally proposed in Becker et al. (1964): see the booklet instructions for details.
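
For readers unfamiliar with the Becker-DeGroot-Marschak procedure, the following sketch illustrates its logic; the offer range and the example lottery are assumptions made for exposition, not the parameters used in the booklet:

```python
import random

def bdm_selling_task(stated_price, play_lottery, max_offer=20.0):
    """Illustrative Becker-DeGroot-Marschak selling mechanism: a buying offer
    is drawn at random; if it is at least the respondent's stated selling
    price, the lottery is sold at the offer price, otherwise the respondent
    keeps the lottery and it is played out. Under this rule, stating one's
    true certainty equivalent is the optimal response."""
    offer = round(random.uniform(0.0, max_offer), 2)  # random buying offer (range assumed)
    if offer >= stated_price:
        return offer            # lottery sold at the randomly drawn offer
    return play_lottery()       # lottery kept and played out for real

# Example with the option described in the Appendix: a 65-in-100 chance of £12.50.
lottery = lambda: 12.50 if random.randint(1, 100) <= 65 else 0.0
payout = bdm_selling_task(stated_price=7.00, play_lottery=lottery)
```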

  4. In the exposition, we shall refer to each lottery by the label given to it in Table 1. However, in order to minimise problems of unclear handwriting, the strips of card depicting lotteries actually had two-letter labels. The sure amount strips did not have labels: when recording where these came in their ranking, respondents were asked to write down the amount itself, prefaced with a £ sign.

  5. On the basis of this criterion, we found that 5 of the 162 people who took part made a one-off error in the ♣ ranking exercise, 3 made a one-off error in the ♦ ranking exercise, with one individual making one-off errors in both exercises. Four respondents’ answers to both ranking exercises were excluded from the analysis because they were deemed to have fundamentally misunderstood the task in both cases. Another 4 committed fundamental errors only in the ♦ exercise. Since the results presented in this paper involve comparisons across the ♦ and ♣ ranking exercises, the analysis is based on the 154 people not excluded from either exercise. (However, note that occasional failures by respondents to answer every question may cause the number of observations in some instances to fall below 154.)
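
A quick check of the sample-size arithmetic in this note, under the stated rule that a respondent is excluded if deemed to have fundamentally misunderstood either ranking exercise:

```python
participants = 162
excluded_both_exercises = 4     # fundamental errors in both the club and diamond rankings
excluded_diamond_only = 4       # fundamental errors in the diamond ranking only
retained = participants - excluded_both_exercises - excluded_diamond_only
assert retained == 154          # the sample used for the cross-exercise comparisons
```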

  6. The highest ranked lottery was assigned a rank of 1, down to 10 for the lowest ranked lottery.

  7. L was ranked slightly higher within its set than G was within its set, and 53 respondents’ inferred values for L were at least as high as their values for G, compared with 51 in the standard task. For E and N, by contrast, the number fell from 58 in the standard task to 43 in the ranking task, where N's relative ranking was slightly lower than E's.

  8. Ideally, we would also have had a direct valuation exercise of the kind undertaken in the ♠ section of Experiment 1. However, we were asking respondents to undertake three ranking exercises involving 25 strips rather than two exercises involving 20 strips, and 20 pairwise choices rather than 12, and we were concerned not to overload respondents.

  9. At the time of writing, a meta-analysis of preference reversals is being undertaken by Nick Bardsley, Peter Moffatt, Chris Starmer and Robert Sugden: across all of the studies they have reviewed so far, regular reversals account for an average of 28% of all observations.

References

  • Becker, Gordon, Morris DeGroot, and Jacob Marschak. (1964). “Measuring Utility by a Single-Response Sequential Method,” Behavioral Science 9, 226–232.

  • Berg, Joyce, John Dickhaut, and John O’Brien. (1985). “Preference Reversal and Arbitrage,” In V. Smith (ed.), Research in Experimental Economics Vol. 3 (pp. 31–72). Greenwich: JAI Press.

  • Bohm, Peter, and Hans Lind. (1993). “Preference Reversal, Real-world Lotteries and Lottery-interested Subjects,” Journal of Economic Behavior and Organization 22, 327–348.

  • Camerer, Colin. (1995). “Individual Decision Making.” In John Kagel and Al Roth (eds.), The Handbook of Experimental Economics. Princeton: Princeton University Press.

  • Cox, James and Seth Epstein. (1989). “Preference Reversals Without the Independence Axiom,” American Economic Review 79, 408–426.

  • Grether, David and Charles Plott. (1979). “Economic Theory of Choice and the Preference Reversal Phenomenon,” American Economic Review 69, 623–638.

  • Knez, Marc and Vernon Smith. (1987). “Hypothetical Valuations and Preference Reversals in the Context of Asset Trading,” In Al Roth (ed.), Laboratory Experimentation in Economics: Six Points of View (pp. 131–154). Cambridge: Cambridge University Press.

  • Lichtenstein, Sarah and Paul Slovic. (1971). “Reversals of Preferences Between Bids and Choices in Gambling Decisions,” Journal of Experimental Psychology 89, 46–55.

  • Lindman, Harold. (1971). “Inconsistent Preferences Among Gamblers,” Journal of Experimental Psychology 89, 390–397.

  • MacDonald, Don, William Huth, and Paul Taube. (1992). “Generalized Expected Utility Analysis and Preference Reversals: Some Initial Results in the Loss Domain,” Journal of Economic Behavior and Organization 17, 115–130.

  • Mowen, John and James Gentry. (1980). “Investigation of the Preference Reversal Phenomenon in a New Product Introduction Task,” Journal of Applied Psychology 65, 715–722.

  • Parducci, Allen and Douglas Wedell. (1986). “The Category Effect with Rating Scales: Number of Categories, Number of Stimuli and Method of Presentation,” Journal of Experimental Psychology: Human Perception and Performance 12, 496–516.

  • Reilly, Robert. (1982). “Preference Reversal: Further Evidence and Some Suggested Modifications in Experimental Design,” American Economic Review 72, 576–584.

  • Robinson, Angela, Michael Jones-Lee, and Graham Loomes. (2001). “Visual Analog Scales, Standard Gambles and Relative Risk Aversion,” Medical Decision Making 21, 17–27.

  • Seidl, Christian. (2002). “Preference Reversal: A Literature Survey,” Journal of Economic Surveys 16, 621–655.

  • Starmer, Chris. (2000). “Developments in Non-Expected Utility Theory: The Hunt for a Descriptive Theory of Choice Under Risk,” Journal of Economic Literature 38, 332–382.

  • Tversky, Amos and Daniel Kahneman. (1986). “Rational Choice and the Framing of Decisions,” Journal of Business 59, S251–S278.

  • Tversky, Amos and Richard Thaler. (1990). “Anomalies: Preference Reversals,” Journal of Economic Perspectives 4, 201–211.

Acknowledgments

This research was undertaken as part of the U.K. Economic and Social Research Council's Award M535255117. We are grateful to Shepley Orr for assistance with the conduct of the experiments.

Author information

Corresponding author

Correspondence to Graham Loomes.

Additional information

JEL Classification C91 · D81

Appendix: General introductory instructions for Experiment 1

We are interested in people's preferences for different chances of receiving different sums of money. A particular chance of receiving a particular sum of money will be called an option.

An option will look like this:

[Figure 1a: the example option X, offering a 65-in-100 chance of £12.50 and otherwise nothing]

If you ended up with option X (which is only an example), it would be played out as follows. We have a bag containing 100 discs, each bearing a different number from 1 to 100 inclusive. You would dip your hand into the bag, pick a single disc and pull it out. If you happened to choose a disc with a number between 1 and 65 inclusive, the experimenter would pay you £12.50 in cash; if you happened to choose a disc with a number between 66 and 100 inclusive, you would go away with nothing.
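
Purely as an illustrative restatement of the paragraph above, a sketch of how option X would be played out; the function name and the simulation are ours, not part of the experimental materials:

```python
import random

def play_option_x():
    """Draw one disc numbered 1-100 from the bag: discs 1-65 pay £12.50,
    discs 66-100 pay nothing (the example option X above)."""
    disc = random.randint(1, 100)
    return 12.50 if disc <= 65 else 0.0

# Sanity check: over many plays the average payout approaches 0.65 * £12.50 = £8.125.
draws = [play_option_x() for _ in range(100_000)]
print(sum(draws) / len(draws))
```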

We shall be asking you three types of question, in no particular order:

  • To make choices between pairs of options

  • To rank a number of options from most to least preferred by you

  • To place values on particular options, in the form of the amounts you would sell them for

Different people with different tastes will answer the questions in different ways. We are interested in each person answering the questions according to their own tastes. To give you an incentive to answer according to your own tastes, at the end of the session, one of your decisions will be chosen at random and will be played out for real. What you get paid for taking part in this experiment will depend ENTIRELY on how your decision in that randomly-selected question turns out. So we suggest you answer each question in turn as if it is THE one on which everything depends—because that may in fact turn out to be the case.

We often run experiments using computers, but this time we are using pen and paper. To make this easier to organise, and to help with the random selection of the question which will determine your payment, we are going to divide the questions into four groups, which we shall label ♣ or ♦ or ♥ or ♠. Then at the end of the session each of you will pick a card at random from a standard pack of playing cards, and the suit will determine which group your payout question is picked from. Thereafter, any one of the decisions within that group is equally likely to be played out for real by you.
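
A minimal sketch of the two-stage random selection just described; the group contents below are placeholders, not the actual question blocks:

```python
import random

def select_payout_question(question_groups):
    """Two-stage random selection: a card drawn from a standard pack determines
    the suit (each suit equally likely, so each group equally likely), then one
    question within that group is chosen uniformly at random to be played out."""
    suit = random.choice(["clubs", "diamonds", "hearts", "spades"])
    question = random.choice(question_groups[suit])
    return suit, question

# Placeholder groups standing in for the four blocks of questions.
groups = {
    "clubs": ["ranking exercise 1"],
    "diamonds": ["ranking exercise 2"],
    "hearts": ["pairwise choice 1", "pairwise choice 2"],
    "spades": ["valuation 1", "valuation 2"],
}
print(select_payout_question(groups))
```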

If you need clarification once the experiment has started, please do not disturb others by calling out. Please raise your hand and one of the organisers will come to you.

[INSTRUCTIONS FOR THE DIRECT VALUATION TASKS FOLLOWED. THESE AND INSTRUCTIONS FOR EXPERIMENT 2 ARE AVAILABLE ON REQUEST FROM THE CORRESPONDING AUTHOR]

Cite this article

Bateman, I., Day, B., Loomes, G. et al. Can ranking techniques elicit robust values? Journal of Risk and Uncertainty 34, 49–66 (2007). https://doi.org/10.1007/s11166-006-9003-4
