Measurement Scales for Scoring or Ranking Sets of Interrelated Items

Abstract

Surveys concerned with human values, economic utilities, organisational features, customer or citizen satisfaction, or with preferences or choices among a set of items may aim at estimating either a ranking or a scoring of the choice set. In this paper we discuss the statistical and practical properties of five techniques for collecting data on a set of interrelated items: ranking the items, picking the best/worst item, partitioning a fixed total among the items, rating each item, and the paired comparison of all distinct pairs of items. We then discuss the feasibility of each technique when a computer-assisted data-collection mode (e.g. CATI (telephone), CAPI (face-to-face), CAWI (web) or CASI (self-administered)) is adopted. The paper concludes with suggestions for the use of each technique in real survey contexts.
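One practical driver of the feasibility discussion is respondent burden: the paired-comparison technique grows quadratically with the number of items, since p items generate p(p−1)/2 distinct pairs. A minimal sketch (the item names are illustrative, not from the chapter):

```python
from itertools import combinations

items = ["price", "quality", "service", "speed"]  # illustrative item set
pairs = list(combinations(items, 2))  # every distinct unordered pair

p = len(items)
assert len(pairs) == p * (p - 1) // 2  # 4 items -> 6 comparisons
```

Doubling the item set roughly quadruples the number of questions, which is why pair-based designs become hard to administer in modes with tight time budgets such as CATI.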


Notes

  1.

    Conjoint analysis is a multivariate method for the statistical analysis of a set of alternatives, each of which is described by two or more jointly considered categories.

  2.

    Readers may refer to Kish (1965) for a comprehensive manual.

  3.

    The family of intra-class correlated errors also includes measures of the so-called “normative” scores (Cattell 1944), which are the scores of an individual that depend upon the scores of other individuals in the population.

  4.

    Even if an ipsative data-collection technique forces undue correlations on response errors, it may be worthwhile to “ipsatise” a data matrix \({\mathbf X }\) if the researcher is interested in the analysis of the differences between rows (i.e. between single individuals’ scores). A matrix may be ipsatised by adding a suitable constant to the scores of a respondent (i.e. to the scores of a row) so that all the new row scores sum to the same constant (Cunningham et al. 1977). Columns of matrix \({\mathbf X }\) may be rescaled to zero mean and unit variance, either before or after ipsatisation.

  5.

    Suppose the \(\xi _i\)s are ordered from largest to smallest; they are non-compensatory if each \(\xi _j\) \((j=1,\ldots , p-1)\) is larger than the sum of all \(\xi _i\)s that are smaller than it (i.e. \(\xi _j > \sum \nolimits _{i>j} \xi _i\)).

  6.

    It may be hypothesised that responses possess the ratio property, which means that the ratio between two values is meaningful. The ipsative constraint implies, instead, that only interval-scale properties apply to fixed-total data.

  7.

    Krosnick and Alwin (1988) propose Gini’s measure \(V_h = 1 - \sum \nolimits _{j=1}^{k} p_{hj}^2\) if the researcher assumes a non-interval scale, where \(p_{hj}\) is the relative frequency of point \(j\) in a scale of \(k\) points.

  8.

    The curve representing the effort due to ranking was traced by assuming that reading, memorising and ordering an item with respect to the previous one (ranking task) requires of a respondent the same effort as reading and assigning a score to an item (rating task). The ideal respondent repeats the memorisation and ordering steps for all items but those already ranked. The burden guessed for the ranking task is probably an underestimate of the effort a conscientious respondent would put into the task. Munson and McIntyre (1979) state that ranking takes about three times longer than comparable rating.

  9.

    Data-collection techniques alternative to the basic ones presented in Sect. 2 are rare but increasing in number. Louvriere et al. (2000) quote experiments by Meyer (1977), Johnson (1989), Louvriere (1993) and Chrzan (1994).
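The computations sketched in Notes 4, 5 and 7 above can be illustrated in a few lines. This is a minimal sketch under stated assumptions: the function names (`ipsatise`, `is_non_compensatory`, `gini_diversity`) are ours, not the chapter's, and rows of the data matrix are taken to represent respondents.

```python
def ipsatise(rows, total=0.0):
    """Note 4: add a constant to each respondent's (row's) scores
    so that every row sums to the same constant `total`."""
    out = []
    for row in rows:
        shift = (total - sum(row)) / len(row)
        out.append([x + shift for x in row])
    return out


def is_non_compensatory(weights):
    """Note 5: with the weights sorted from largest to smallest,
    each weight must exceed the sum of all smaller weights."""
    w = sorted(weights, reverse=True)
    return all(w[j] > sum(w[j + 1:]) for j in range(len(w) - 1))


def gini_diversity(counts):
    """Note 7: Gini's measure V = 1 - sum_j p_j^2, where p_j is the
    relative frequency of scale point j (counts are absolute)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)
```

For instance, `is_non_compensatory([8, 4, 2, 1])` holds because 8 > 4 + 2 + 1, 4 > 2 + 1 and 2 > 1, whereas a set of equal weights fails the condition.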

References

  • Aloysius, J. A., Davis, F. D., Wilson, D. D., Taylor, A. R., & Kottemann, J. E. (2006). User acceptance of multi-criteria decision support systems: The impact of preference elicitation techniques. European Journal of Operational Research, 169(1), 273–285.

  • Alwin, D. F., & Krosnick, J. A. (1985). The measurement of values in surveys: A comparison of ratings and rankings. Public Opinion Quarterly, 49(4), 535–552.

  • Baumgartner, H., & Steenkamp, J.-B. E. M. (2001). Response styles in marketing research: A cross-national investigation. Journal of Marketing Research, 38(2), 143–156.

  • Bassi, F., & Fabbris, L. (1997). Estimators of nonsampling errors in interview-reinterview supervised surveys with interpenetrated assignments. In: L. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz & D. Trewin (Eds.), Survey measurement and process quality (pp. 733–751). New York: Wiley.

  • Beatty, S. E., Kahle, L. R., Homer, P., & Misra, K. (1985). Alternative measurement approaches to consumer values: The list of values and the Rokeach value survey. Psychology and Marketing, 2(3), 81–200.

  • Ben-Akiva, M. E., Bradley, M., Morikawa, T., Benjamin, J., Novak, T., Oppenwal, H., & Rao, V. (1994). Combining revealed and stated preferences data. Marketing Letters, 5(4), 335–351.

  • Bettman, J. R., Johnson, E. J., Luce, M. F., & Payne, J. (1993). Correlation, conflict, and choice. Journal of Experimental Psychology, 19, 931–951.

  • Bhat, C. (1997). An endogenous segmentation mode choice model with an application to intercity travel. Transportation Science, 31(1), 34–48.

  • Brauer, A. (1957). A new proof of theorems of Perron and Frobenius on nonnegative matrices. Duke Mathematical Journal, 24, 367–368.

  • Brauer, A., & Gentry, I. C. (1968). On the characteristic roots of tournament matrices. Bulletin of the American Mathematical Society, 74(6), 1133–1135.

  • Brunk, H. D. (1960). Mathematical models for ranking from paired comparisons. Journal of the American Statistical Association, 55, 503–520.

  • Cattell, R. B. (1944). Psychological measurement: Normative, ipsative, interactive. Psychological Review, 51, 292–303.

  • Chan, W., & Bentler, P. M. (1993). The covariance structure analysis of ipsative data. Sociological Methods Research, 22(2), 214–247.

  • Chapman, R., & Staelin, R. (1982). Exploiting rank ordered choice set data within the stochastic utility model. Journal of Marketing Research, 19(3), 288–301.

  • Chrzan, K. (1994). Three kinds of order effects in choice-based conjoint analysis. Marketing Letters, 5(2), 165–172.

  • Clemans, W. V. (1966). An analytical and empirical examination of some properties of ipsative measures. Psychometric monographs (Vol. 14). Richmond: Psychometric Society. http://www.psychometrika.org/journal/online/MN14.pdf.

  • Coombs, C. H. (1976). A theory of data. Ann Arbor: Mathesis Press.

  • Conrad, F. G., Couper, M. P., Tourangeau, R., & Galesic, M. (2005). Interactive feedback can improve the quality of responses in web surveys. In: ESF Workshop on Internet Survey Methodology (Dubrovnik, 26–28 September 2005).

  • Crosby, L. A., Bitter, M. J., & Gill, J. D. (1990). Organizational structure of values. Journal of Business Research, 20, 123–134.

  • Cunningham, W. H., Cunningham, I. C. M., & Green, R. T. (1977). The ipsative process to reduce response set bias. Public Opinion Quarterly, 41, 379–384.

  • Elrod, T., Louvriere, J. J., & Davey, K. S. (1993). An empirical comparison of ratings-based and choice-based conjoint models. Journal of Marketing Research, 24(3), 368–377.

  • Fabbris, L. (2010). Dimensionality of scores obtained with a paired-comparison tournament system of questionnaire items. In: F. Palumbo, C. N. Lauro & M. J. Greenacre (Eds.), Data analysis and classification. Proceedings of the 6th Conference of the Classification and Data Analysis Group of the Società Italiana di Statistica (pp. 155–162). Berlin: Springer.

  • Fabbris, L. (2011). One-dimensional preference imputation through transition rules. In: B. Fichet, D. Piccolo, R. Verde & M. Vichi (Eds.), Classification and multivariate analysis for complex data structures (pp. 245–252). Heidelberg: Springer.

  • Fabbris, L., & Fabris, G. (2003). Sistema di quesiti a torneo per rilevare l’importanza di fattori di customer satisfaction mediante un sistema CATI. In: L. Fabbris (Ed.), LAID-OUT: Scoprire i Rischi Con l’analisi di Segmentazione (p. 322). Padova: Cleup.

  • Fellegi, I. (1964). Response variance and its estimation. Journal of the American Statistical Association, 59, 1016–1041.

  • Ganassali, S. (2008). The influence of the design of web survey questionnaires on the quality of responses. Survey Research Methods, 2(1), 21–32.

  • Green, P. E., & Srinivasan, V. (1990). Conjoint analysis in marketing: New developments with implications for research and practice. Journal of Marketing, 54(4), 3–19.

  • Green, P. E., Krieger, A. M., & Wind, Y. (2001). Thirty years of conjoint analysis: Reflections and prospects. Interfaces, 31(3.2), 556–573.

  • Gustafsson, A., Herrmann, A., & Huber, F. (Eds.). (2007). Conjoint measurement: Methods and applications (4th edn.). Berlin: Springer.

  • Harzing, A.-W., et al. (2009). Rating versus ranking: What is the best way to reduce response and language bias in cross-national research? International Business Review, 18(4), 417–432.

  • Hensher, D. A. (1998). Establishing a fare elasticity regime for urban passenger transport: Non-concession commuters. Journal of Transport Economics and Policy, 32(2), 221–246.

  • Jacoby, W. G. (2011). Measuring value choices: Are rank orders valid indicators? Presented at the 2011 Annual Meetings of the Midwest Political Science Association, Chicago, IL.

  • Johnson, R. (1989). Making decisions with incomplete information: The first complete test of the inference model. Advances in Consumer Research, 16, 522–528.

  • Kamakura, W. A., & Mazzon, J. A. (1991). Value segmentation: A model for the measurement of values and value systems. Journal of Consumer Research, 18, 208–218.

  • Kish, L. (1965). Survey sampling. New York: Wiley.

  • Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. Public Opinion Quarterly, 52(4), 526–538.

  • Kuhfeld, W. F., Tobias, R. B., & Garratt, M. (1994). Efficient experimental design with marketing research applications. Journal of Marketing Research, 31(4), 545–557.

  • Louvriere, J. J., Fox, M., & Moore, W. (1993). Cross-task validity comparisons of stated preference choice models. Marketing Letters, 4(3), 205–213.

  • Louvriere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated choice methods: Analysis and application. Cambridge: Cambridge University Press.

  • Maio, G. R., Roese, N. J., Seligman, C., & Katz, A. (1996). Rankings, ratings, and the measurement of values: Evidence for the superior validity of ratings. Basic and Applied Social Psychology, 18(2), 171–181.

  • Martignon, L., & Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparisons. Theory and Decisions, 52, 29–71.

  • McCarty, J. A., & Shrum, L. J. (1997). Measuring the importance of positive constructs: A test of alternative rating procedures. Marketing Letters, 8(2), 239–250.

  • McCarty, J. A., & Shrum, L. J. (2000). The measurement of personal values in research. Public Opinion Quarterly, 64, 271–298.

  • Meyer, R. (1977). An experimental analysis of student apartment selection decisions under uncertainty. Great Plains-Rocky Mountains Geographical Journal, 6(special issue), 30–38.

  • Moshkovich, H. M., Schellenberger, R. E., & Olson, D. L. (1998). Data influences the result more than preferences: Some lessons from implementation of multiattribute techniques in a real decision task. Decision Support Systems, 22, 73–84.

  • Munson, J. M., & McIntyre, S. H. (1979). Developing practical procedures for the measurement of personal values in cross-cultural marketing. Journal of Marketing Research, 16(26), 55–60.

  • Ovada, S. (2004). Ratings and rankings: Reconsidering the structure of values and their measurement. International Journal of Social Research Methodology, 7(5), 404–414.

  • Rankin, W. L., & Grube, J. W. (1980). A comparison of ranking and rating procedures for value system measurement. European Journal of Social Psychology, 10(3), 233–246.

  • Rokeach, M. (1967). Value survey. Sunnyvale: Halgren Tests (873 Persimmon Avenue).

  • Rokeach, M. (1979). Understanding human values: Individual and societal. New York: Free Press.

  • Rokeach, M., & Ball Rokeach, S. J. (1989). Stability and change in American value priorities. American Psychologist, 44, 775–784.

  • Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234–281.

  • Scheffé, H. (1952). An analysis of variance for paired comparisons. Journal of the American Statistical Association, 47, 381–400.

  • Small, K. A., & Rosen, H. S. (1981). Applied welfare economics with discrete choice models. Econometrica, 49(1), 105–130.

  • Swait, J., & Adamowicz, W. (1996). The effect of choice environment and task demands on consumer behavior: Discriminating between contribution and confusion. Department of Rural Economy, Staff Paper 96–09, University of Alberta, Alberta.

  • Vanleeuwen, D. M., & Mandabach, K. H. (2002). A note on the reliability of ranked items. Sociological Methods Research, 31(1), 87–105.

Acknowledgments

This paper was realised thanks to a grant from the Italian Ministry of Education, University and Research (PRIN 2007, CUP C91J11002460001) and another grant from the University of Padua (Ateneo 2008, CUP CPDA081538).

Author information

Correspondence to Luigi Fabbris.

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

Cite this chapter

Fabbris, L. (2013). Measurement Scales for Scoring or Ranking Sets of Interrelated Items. In: Davino, C., Fabbris, L. (eds) Survey Data Collection and Integration. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21308-3_2
