Equivalence Testing for Factor Invariance Assessment with Categorical Indicators

  • W. Holmes Finch
  • Brian F. French
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 265)

Abstract

Factorial invariance assessment is central to the development of educational and psychological instruments. Establishing the invariance of the factor structure is key to building a strong validity argument and to establishing the fairness of score use. Fit indices and guidelines for judging a lack of invariance are an ever-developing line of research. An equivalence testing approach to invariance assessment, based on the RMSEA, has been introduced. Simulation work has demonstrated that this technique is effective for identifying loading and intercept noninvariance under a variety of conditions when the indicator variables are continuous and normally distributed. However, in many applications indicators are categorical (e.g., ordinal items). Equivalence testing based on the RMSEA must be adjusted to account for the presence of ordinal data to ensure the accuracy of the procedure. The purpose of this simulation study is to investigate the performance of three alternatives for making such adjustments, based on work by Yuan and Bentler (Sociological Methodology, 30(1):165–200, 2000) and Maydeu-Olivares and Joe (Psychometrika, 71(4):713–732, 2006). Equivalence testing procedures based on the RMSEA using these adjustments are investigated and compared with the chi-square difference test. Manipulated factors included sample size, magnitude of noninvariance, proportion of noninvariant indicators, model parameter (loading or intercept), and number of indicators; the outcomes of interest were Type I error and power rates. Results demonstrated that the \( T_{3} \) statistic (Asparouhov & Muthén, 2010) in conjunction with diagonally weighted least squares estimation yielded the most accurate invariance testing outcomes.
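The RMSEA-based equivalence test referred to above can be illustrated in outline. The sketch below is an illustrative assumption, not the authors' implementation (the study itself used Mplus and R): following the general logic of MacCallum, Browne, and Sugawara (1996) and Yuan et al. (2016), one tests H0: RMSEA ≥ ε₀ against H1: RMSEA < ε₀, so that a *small* p-value supports equivalence (acceptable fit, and hence invariance when the statistic comes from constrained multi-group models). The function names, the (n − 1) scaling convention, and the Monte Carlo stand-in for the noncentral chi-square CDF are all choices made for this sketch.

```python
# Illustrative sketch only -- not the authors' code. Conventions
# (function names, (n - 1) scaling, Monte Carlo CDF) are assumptions.
import math
import random


def rmsea(chi2, df, n):
    """Point estimate of RMSEA from a chi-square fit statistic,
    degrees of freedom, and sample size (using n - 1 scaling)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))


def noncentral_chi2_cdf(x, df, ncp, reps=50_000, seed=1):
    """Monte Carlo estimate of P(X <= x) for a noncentral chi-square
    with df degrees of freedom and noncentrality ncp (stdlib only;
    in practice one would use an exact routine, e.g. scipy's ncx2)."""
    rng = random.Random(seed)
    delta = math.sqrt(ncp / df)  # spread noncentrality evenly across terms
    hits = 0
    for _ in range(reps):
        s = sum((rng.gauss(0.0, 1.0) + delta) ** 2 for _ in range(df))
        if s <= x:
            hits += 1
    return hits / reps


def equivalence_test(chi2, df, n, eps0=0.05):
    """One-sided equivalence test of H0: RMSEA >= eps0 vs
    H1: RMSEA < eps0. Under H0 at the boundary, the fit statistic
    follows a noncentral chi-square with ncp = (n - 1) * df * eps0^2,
    so the p-value is the lower-tail probability of the observed chi2.
    A small p-value supports adequate fit (equivalence)."""
    ncp = (n - 1) * df * eps0 ** 2
    return noncentral_chi2_cdf(chi2, df, ncp)
```

As a usage sketch, `equivalence_test(12.0, 10, 500)` returns the lower-tail probability of the observed statistic under the boundary noncentral distribution; comparing it with α decides whether fit is close enough to declare equivalence. For categorical indicators, the chi-square fed into such a test must first be adjusted (e.g., the \( T_{3} \) correction with diagonally weighted least squares), which is the question the study investigates.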

Keywords

Invariance testing · Equivalence test · Categorical indicator

References

  1. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  2. Asparouhov, T., & Muthén, B. O. (2010). Simple second order chi-square correction. Retrieved from: http://www.statmodel.com/download/WLSMV_new_chi21.pdf.
  3. Bradley, J. V. (1978). Robustness? British Journal of Mathematical and Statistical Psychology, 31, 321–339.
  4. Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464–504.
  5. Finch, W. H., & French, B. F. (2018). A simulation investigation of the performance of invariance assessment using equivalence testing procedures. Structural Equation Modeling: A Multidisciplinary Journal, 25(5), 673–686.
  6. Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9(4), 466–491.
  7. French, B. F., & Finch, W. H. (2006). Confirmatory factor analytic procedures for the determination of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 13, 378–402.
  8. Kline, R. B. (2016). Principles and practice of structural equation modeling. New York: The Guilford Press.
  9. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149.
  10. Marcoulides, K. M., & Yuan, K.-H. (2017). New ways to evaluate goodness of fit: A note on using equivalence testing to assess structural equation models. Structural Equation Modeling, 24(1), 148–153.
  11. Maydeu-Olivares, A., & Joe, H. (2006). Limited information goodness-of-fit testing in multidimensional contingency tables. Psychometrika, 71(4), 713–732.
  12. Millsap, R. E. (2011). Statistical approaches to measurement invariance. New York: Routledge.
  13. Muthén, B. (1993). Goodness of fit with categorical and other nonnormal variables. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 205–234). Newbury Park, CA: Sage.
  14. Muthén, L. K., & Muthén, B. O. (1998–2016). Mplus user's guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
  15. Muthén, B. O., du Toit, S. H., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report.
  16. R Development Core Team. (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
  17. Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research, 25(1), 78–90.
  18. Wicherts, J. M., & Dolan, C. V. (2010). Measurement invariance in confirmatory factor analysis: An illustration using IQ test performance of minorities. Educational Measurement: Issues and Practice, 29(3), 39–47.
  19. Wu, A. D., Li, Z., & Zumbo, B. D. (2007). Decoding the meaning of factorial invariance and updating the practice of multi-group confirmatory factor analysis: A demonstration with TIMSS data. Practical Assessment, Research & Evaluation, 12(3), 1–26.
  20. Yuan, K.-H., & Bentler, P. M. (2000). Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data. Sociological Methodology, 30(1), 165–200.
  21. Yuan, K.-H., & Bentler, P. M. (2004). On chi-square difference and Z tests in mean and covariance structure analysis when the base model is misspecified. Educational and Psychological Measurement, 64, 737–757.
  22. Yuan, K.-H., & Chan, W. (2016). Measurement invariance via multigroup SEM: Issues and solutions with chi-square difference tests. Psychological Methods, 21(3), 405–426.
  23. Yuan, K.-H., Chan, W., Marcoulides, G. A., & Bentler, P. M. (2016). Assessing structural equation models by equivalence testing with adjusted fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 23(3), 319–330.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Ball State University, Muncie, USA
  2. Washington State University, Pullman, USA