Abstract
This chapter reviews important considerations associated with measurement, the process of assigning numbers to reflect some phenomenon of interest. Measurement is critically important whenever quantitative data are gathered, particularly under the positivist pattern of guiding assumptions. We examine four central measurement considerations: scale, validity, reliability and sensitivity. We then explore how measures of constructs can be combined into models, and the roles that different kinds of variables play in such models. Finally, we discuss the issues you need to think about when validating your own measure or when adopting a measure created by other researchers.
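One of the reliability considerations the chapter discusses, internal consistency, is commonly summarised with Cronbach's alpha. As a minimal sketch (not taken from the chapter itself), alpha can be computed from a respondents-by-items score matrix; the example data below are hypothetical Likert-style responses invented for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents, 4 Likert items (1-5)
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))  # → 0.958
```

Values closer to 1 indicate that the items vary together, which is usually read as evidence that they tap a common construct, though high alpha alone does not establish validity.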
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Cooksey, R., & McDonald, G. (2019). When and how should I deal with measurements? In Surviving and thriving in postgraduate research. Springer, Singapore. https://doi.org/10.1007/978-981-13-7747-1_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-7746-4
Online ISBN: 978-981-13-7747-1
eBook Packages: Education (R0)