Testing for measurement invariance by detecting local misspecification and an illustration across online and paper-and-pencil samples
Political scientists often need to evaluate whether samples are comparable, for example, when analysing different countries or time points or when comparing data collected using different methods. A necessary condition for conducting such meaningful cross-group comparisons is the establishment of measurement invariance. One of the most frequently used procedures for establishing measurement invariance is multigroup confirmatory factor analysis (MGCFA). This method has been criticised in the literature because it may suggest that a model fits the data even when the model contains serious misspecifications. We present an alternative method for testing measurement invariance based on the detection of local misspecifications, and we illustrate its use on two data sets assessing value priorities, a construct often analysed in political science, collected using paper-and-pencil and web modes of data collection.
Keywords: measurement invariance; detection of misspecification; multigroup confirmatory factor analysis (MGCFA); human values; statistical power; mode effects
The work of the first, second, and fourth authors was supported by the University Research Priority Program (URPP) ‘Social Networks’, University of Zürich. The work of the third author was supported by the Netherlands Organization for Scientific Research (NWO) [Vici grant 453-10-002]. The second author would like to thank the EUROLAB, GESIS, Cologne, for their hospitality during work on this article. The authors would also like to thank Lisa Trierweiler for the English proofreading of the manuscript.
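The core of the detection approach described above is to judge each restricted parameter jointly by its modification index (MI), its expected parameter change (EPC), and the statistical power to detect a misspecification of a substantively relevant size, rather than by global fit alone. The following is a minimal sketch of that decision logic in Python; the function name and the thresholds (a relevant misspecification size of 0.1, a power cut-off of 0.75, alpha of 0.05) are illustrative assumptions chosen for the example, not values prescribed by the article.

```python
from scipy.stats import chi2, ncx2

def judge_misspecification(mi, epc, delta=0.1, alpha=0.05, power_cut=0.75):
    """Classify one restricted parameter by combining the modification
    index (mi), the expected parameter change (epc), and the power to
    detect a misspecification of size delta.

    All thresholds (delta, alpha, power_cut) are illustrative defaults.
    """
    crit = chi2.ppf(1 - alpha, df=1)   # chi-square critical value, 1 df
    significant = mi > crit
    # Noncentrality parameter implied if the true misspecification
    # equalled delta (MI is approximately epc**2 / var(epc)):
    ncp = (delta / epc) ** 2 * mi
    power = 1 - ncx2.cdf(crit, df=1, nc=ncp)
    if power >= power_cut:
        # High power: a significant MI is judged by the size of the EPC.
        if significant and abs(epc) >= delta:
            return "misspecified"
        return "no misspecification"
    # Low power: a significant MI by itself signals a misspecification;
    # a non-significant MI with low power is uninformative.
    return "misspecified" if significant else "inconclusive"
```

For instance, a large MI with a trivially small EPC (high power to detect a relevant deviation) would be classified as no misspecification, whereas a significant MI under low power would be flagged as misspecified, mirroring the logic implemented in software such as JRule.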