Abstract
Surveys concerned with human values, economic utilities, organisational features, customer or citizen satisfaction, or with preferences or choices among a set of items may aim at estimating either a ranking or a scoring of the choice set. In this paper we discuss the statistical and practical properties of five techniques for collecting data on a set of interrelated items, namely: the ranking of items, the picking of the best/worst item, the partitioning of a fixed total among the items, the rating of each item, and the paired comparison of all distinct pairs of items. We then discuss the feasibility of each technique when a computer-assisted data-collection mode (e.g. CATI (telephone), CAPI (face-to-face), CAWI (web) or CASI (self-administered)) is adopted. The paper concludes with suggestions for the use of each technique in real survey contexts.
Notes
- 1.
Conjoint analysis is a multivariate method for the statistical analysis of a set of alternatives, each of which is described by two or more jointly considered categories.
- 2.
Readers may refer to Kish (1965) for a comprehensive manual.
- 3.
The family of intra-class correlated errors also includes measures of the so-called “normative” scores (Cattell 1944), which are the scores of an individual that depend upon the scores of other individuals in the population.
- 4.
Even if an ipsative data-collection technique forces undue correlations on response errors, it may be worthwhile to “ipsatise” a data matrix \({\mathbf X }\) if the researcher is interested in the analysis of the differences between rows (i.e. between individual respondents’ scores). A matrix may be ipsatised by adding a suitable constant to the scores of each respondent (i.e. to the scores of each row) so that all the new row scores sum to the same constant (Cunningham et al. 1977). The columns of matrix \({\mathbf X }\) may be rescaled to zero mean and unit variance, either before or after ipsatisation.
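As a minimal sketch of this step (the function name is illustrative, not from the chapter), row-wise ipsatisation adds one constant per respondent so that every row of the matrix sums to the same total:

```python
def ipsatise(X, total=0.0):
    """Add a per-row constant so each row (respondent) of X sums to `total`."""
    out = []
    for row in X:
        shift = (total - sum(row)) / len(row)  # additive constant for this row
        out.append([x + shift for x in row])
    return out

X = [[5.0, 3.0, 2.0], [9.0, 6.0, 3.0]]
X_ips = ipsatise(X)  # every row of X_ips now sums to 0.0
```

Column standardisation to zero mean and unit variance could then be applied before or after this step, as the note observes.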
- 5.
Suppose the \(\xi _i\)s are ordered from largest to smallest; they are non-compensatory if each \(\xi _j\) \((j=1,\ldots , p-1)\) is larger than the sum of all the \(\xi _i\)s that are smaller than it (i.e.: \(\xi _j > \sum \nolimits _{i>j} \xi _i\)).
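The non-compensatory condition can be checked with a short sketch (illustrative helper, assuming the weights need not arrive pre-sorted):

```python
def is_non_compensatory(weights):
    # Sort in decreasing order; each weight must exceed the sum of all
    # weights smaller than it (xi_j > sum over i > j of xi_i).
    w = sorted(weights, reverse=True)
    return all(w[j] > sum(w[j + 1:]) for j in range(len(w) - 1))

is_non_compensatory([8, 4, 2, 1])  # True: 8 > 7, 4 > 3, 2 > 1
is_non_compensatory([5, 4, 3])     # False: 5 < 4 + 3
```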
- 6.
It may be hypothesised that responses possess the ratio property, which means that the ratio between two values is logically justified. The ipsative constraint implies, instead, that only the interval scale properties apply to fixed-total data.
- 7.
Krosnick and Alwin (1988) propose Gini’s measure if the researcher assumes a non-interval scale: \(V_h = 1 - \sum \nolimits _{j=1}^{k} p_{hj}^2\), where \(p_{hj}\) is the relative frequency of point \(j\) in a scale of \(k\) points for respondent \(h\).
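As an illustration of this measure (function name is ours, not from the source), given the counts of items a respondent placed at each of the \(k\) scale points:

```python
def differentiation(counts):
    # V_h = 1 - sum_j p_hj^2, with p_hj the relative frequency of
    # scale point j among respondent h's ratings (k points overall).
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

differentiation([4, 0, 0, 0])  # 0.0: every item rated at the same point
differentiation([1, 1, 1, 1])  # 0.75: maximal differentiation for k = 4
```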
- 8.
The curve representing the effort due to ranking was traced by assuming that reading, memorising and ordering an item with respect to the previous one (ranking task) requires of a respondent the same effort as reading and assigning a score to an item (rating task). The ideal respondent repeats the memorisation-and-ordering step for all items except those already ranked. The burden estimated for the ranking task probably understates the effort a conscientious respondent would expend on the task. Munson and McIntyre (1979) state that ranking takes about three times longer than comparable rating.
References
Aloysius, J. A., Davis, F. D., Wilson, D. D., Taylor, A. R., & Kottemann, J. E. (2006). User acceptance of multi-criteria decision support systems: The impact of preference elicitation techniques. European Journal of Operational Research, 169(1), 273–285.
Alwin, D. F., & Krosnick, J. A. (1985). The measurement of values in surveys: A comparison of ratings and rankings. Public Opinion Quarterly, 49(4), 535–552.
Baumgartner, H., & Steenkamp, J.-B. E. M. (2001). Response styles in marketing research: A cross-national investigation. Journal of Marketing Research, 38(2), 143–156.
Bassi, F., & Fabbris, L. (1997). Estimators of nonsampling errors in interview-reinterview supervised surveys with interpenetrated assignments. In: L. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz & D. Trewin (Eds.), Survey measurement and process quality (pp. 733–751). New York: Wiley.
Beatty, S. E., Kahle, L. R., Homer, P., & Misra, K. (1985). Alternative measurement approaches to consumer values: The list of values and the Rokeach value survey. Psychology and Marketing, 2(3), 181–200.
Ben-Akiva, M. E., Bradley, M., Morikawa, T., Benjamin, J., Novak, T., Oppenwal, H., & Rao, V. (1994). Combining revealed and stated preferences data. Marketing Letters, 5(4), 335–351.
Bettman, J. R., Johnson, E. J., Luce, M. F., & Payne, J. (1993). Correlation, conflict, and choice. Journal of Experimental Psychology, 19, 931–951.
Bhat, C. (1997). An endogenous segmentation mode choice model with an application to intercity travel. Transportation Science, 31(1), 34–48.
Brauer, A. (1957). A new proof of theorems of Perron and Frobenius on nonnegative matrices. Duke Mathematical Journal, 24, 367–368.
Brauer, A., & Gentry, I. C. (1968). On the characteristic roots of tournament matrices. Bulletin of the American Mathematical Society, 74(6), 1133–1135.
Brunk, H. D. (1960). Mathematical models for ranking from paired comparisons. Journal of the American Statistical Association, 55, 503–520.
Cattell, R. B. (1944). Psychological measurement: Normative, ipsative, interactive. Psychological Review, 51, 292–303.
Chan, W., & Bentler, P. M. (1993). The covariance structure analysis of ipsative data. Sociological Methods Research, 22(2), 214–247.
Chapman, R., & Staelin, R. (1982). Exploiting rank ordered choice set data within the stochastic utility model. Journal of Marketing Research, 19(3), 288–301.
Chrzan, K. (1994). Three kinds of order effects in choice-based conjoint analysis. Marketing Letters, 5(2), 165–172.
Clemans, W. V. (1966). An analytical and empirical examination of some properties of ipsative measures. Psychometric monographs (Vol. 14). Richmond: Psychometric Society. http://www.psychometrika.org/journal/online/MN14.pdf.
Coombs, C. H. (1976). A theory of data. Ann Arbor: Mathesis Press.
Conrad, F. G., Couper, M. P., Tourangeau, R., & Galesic, M. (2005). Interactive feedback can improve the quality of responses in web surveys. In: ESF Workshop on Internet Survey Methodology (Dubrovnik, 26–28 September 2005).
Crosby, L. A., Bitter, M. J., & Gill, J. D. (1990). Organizational structure of values. Journal of Business Research, 20, 123–134.
Cunningham, W. H., Cunningham, I. C. M., & Green, R. T. (1977). The ipsative process to reduce response set bias. Public Opinion Quarterly, 41, 379–384.
Elrod, T., Louviere, J. J., & Davey, K. S. (1993). An empirical comparison of ratings-based and choice-based conjoint models. Journal of Marketing Research, 24(3), 368–377.
Fabbris, L. (2010). Dimensionality of scores obtained with a paired-comparison tournament system of questionnaire items. In: F. Palumbo, C. N. Lauro & M. J. Greenacre (Eds.), Data analysis and classification. Proceedings of the 6th Conference of the Classification and Data Analysis Group of the Società Italiana di Statistica (pp. 155–162). Berlin: Springer.
Fabbris, L. (2011). One-dimensional preference imputation through transition rules. In: B. Fichet, D. Piccolo, R. Verde & M. Vichi (Eds.), Classification and multivariate analysis for complex data structures (pp. 245–252). Heidelberg: Springer.
Fabbris, L., & Fabris, G. (2003). Sistema di quesiti a torneo per rilevare l’importanza di fattori di customer satisfaction mediante un sistema CATI. In: L. Fabbris (Ed.), LAID-OUT: Scoprire i Rischi Con l’analisi di Segmentazione (p. 322). Padova: Cleup.
Fellegi, I. (1964). Response variance and its estimation. Journal of the American Statistical Association, 59, 1016–1041.
Ganassali, S. (2008). The influence of the design of web survey questionnaires on the quality of responses. Survey Research Methods, 2(1), 21–32.
Green, P. E., & Srinivasan, V. (1990). Conjoint analysis in marketing: New developments with implications for research and practice. Journal of Marketing, 54(4), 3–19.
Green, P. E., Krieger, A. M., & Wind, Y. (2001). Thirty years of conjoint analysis: Reflections and prospects. Interfaces, 31(3, Part 2), S56–S73.
Gustafsson, A., Herrmann, A., & Huber, F. (Eds.). (2007). Conjoint measurement: Methods and applications (4th edn.). Berlin: Springer.
Harzing, A.-W., et al. (2009). Rating versus ranking: What is the best way to reduce response and language bias in cross-national research? International Business Review, 18(4), 417–432.
Hensher, D. A. (1998). Establishing a fare elasticity regime for urban passenger transport: Non-concession commuters. Journal of Transport Economics and Policy, 32(2), 221–246.
Jacoby, W. G. (2011). Measuring value choices: Are rank orders valid indicators? Presented at the 2011 Annual Meetings of the Midwest Political Science Association, Chicago, IL.
Johnson, R. (1989). Making decisions with incomplete information: The first complete test of the inference model. Advances in Consumer Research, 16, 522–528.
Kamakura, W. A., & Mazzon, J. A. (1991). Value segmentation: A model for the measurement of values and value systems. Journal of Consumer Research, 18, 208–218.
Kish, L. (1965). Survey sampling. New York: Wiley.
Krosnick, J. A., & Alwin, D. F. (1988). A test of the form-resistant correlation hypothesis: Ratings, rankings, and the measurement of values. Public Opinion Quarterly, 52(4), 526–538.
Kuhfeld, W. F., Tobias, R. B., & Garratt, M. (1994). Efficient experimental design with marketing research applications. Journal of Marketing Research, 31(4), 545–557.
Louviere, J. J., Fox, M., & Moore, W. (1993). Cross-task validity comparisons of stated preference choice models. Marketing Letters, 4(3), 205–213.
Louviere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated choice methods: Analysis and application. Cambridge: Cambridge University Press.
Maio, G. R., Roese, N. J., Seligman, C., & Katz, A. (1996). Rankings, ratings, and the measurement of values: Evidence for the superior validity of ratings. Basic and Applied Social Psychology, 18(2), 171–181.
Martignon, L., & Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparisons. Theory and Decision, 52, 29–71.
McCarty, J. A., & Shrum, L. J. (1997). Measuring the importance of positive constructs: A test of alternative rating procedures. Marketing Letters, 8(2), 239–250.
McCarty, J. A., & Shrum, L. J. (2000). The measurement of personal values in research. Public Opinion Quarterly, 64, 271–298.
Meyer, R. (1977). An experimental analysis of student apartment selection decisions under uncertainty. Great Plains-Rocky Mountains Geographical Journal, 6(special issue), 30–38.
Moshkovich, H. M., Schellenberger, R. E., & Olson, D. L. (1998). Data influences the result more than preferences: Some lessons from implementation of multiattribute techniques in a real decision task. Decision Support Systems, 22, 73–84.
Munson, J. M., & McIntyre, S. H. (1979). Developing practical procedures for the measurement of personal values in cross-cultural marketing. Journal of Marketing Research, 16(26), 55–60.
Ovadia, S. (2004). Ratings and rankings: Reconsidering the structure of values and their measurement. International Journal of Social Research Methodology, 7(5), 403–414.
Rankin, W. L., & Grube, J. W. (1980). A comparison of ranking and rating procedures for value system measurement. European Journal of Social Psychology, 10(3), 233–246.
Rokeach, M. (1967). Value survey. Sunnyvale: Halgren Tests (873 Persimmon Avenue).
Rokeach, M. (1979). Understanding human values: Individual and societal. New York: Free Press.
Rokeach, M., & Ball-Rokeach, S. J. (1989). Stability and change in American value priorities. American Psychologist, 44, 775–784.
Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234–281.
Scheffé, H. (1952). An analysis of variance for paired comparisons. Journal of the American Statistical Association, 47, 381–400.
Small, K. A., & Rosen, H. S. (1981). Applied welfare economics with discrete choice models. Econometrica, 49(1), 105–130.
Swait, J., & Adamowicz, W. (1996). The effect of choice environment and task demands on consumer behavior: Discriminating between contribution and confusion. Department of Rural Economy, Staff Paper 96–09, University of Alberta, Alberta.
Vanleeuwen, D. M., & Mandabach, K. H. (2002). A note on the reliability of ranked items. Sociological Methods Research, 31(1), 87–105.
Acknowledgments
This paper was realised thanks to a grant from the Italian Ministry of Education, University and Research (PRIN 2007, CUP C91J11002460001) and another grant from the University of Padua (Ateneo 2008, CUP CPDA081538).
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this chapter
Fabbris, L. (2013). Measurement Scales for Scoring or Ranking Sets of Interrelated Items. In: Davino, C., Fabbris, L. (eds) Survey Data Collection and Integration. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21308-3_2
Print ISBN: 978-3-642-21307-6
Online ISBN: 978-3-642-21308-3