Abstract
In psychological research and practice, a person’s scores on two different traits or abilities are often compared. Such within-person comparisons require that the measurements have equal units (EU) and/or equal origins, assumptions that are rarely validated. We describe a multidimensional SEM/IRT model from the literature and, using principles of conjoint measurement, show that its expected response variables satisfy the axioms of additive conjoint measurement for measurement on a common scale. In an application to Quality of Life data, the EU analysis serves as a pre-processing step to derive a simple-structure Quality of Life model with three dimensions expressed in equal units. The results are used to answer questions that can only be addressed by scores expressed in equal units. When the EU model fits the data, scores in the corresponding simple-structure model have added validity in that they can address questions that could not otherwise be addressed. Limitations and the need for further research are discussed.
Notes
Across samples, standard errors of variance estimates are unlikely to follow a normal distribution, so the error bands in Fig. 3 should be interpreted descriptively rather than inferentially. Nevertheless, these error bands led us to examine a model with both equal units and equal variances. See below.
Jeon et al. (2018) fit a model that they call the proportionality model, which is a third way to parameterize the EU model. In the proportionality model, all specific factors are constrained to have equal variances. In this bi-factor parameterization, the general and specific factor coefficients for a variable are proportional, rather than equal. See Jeon et al. (2018) for details.
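The equivalence between the two parameterizations described in this note can be checked algebraically: rescaling each specific factor to unit variance, while multiplying its loadings by the square root of the original specific-factor variance, leaves the model-implied covariance matrix unchanged, so equal loadings with free specific variances and proportional loadings with equal (unit) specific variances imply the same covariance structure. A minimal numeric sketch follows; all loadings, groupings, and variances are invented for illustration and this is not the authors’ code.

```python
import numpy as np

# Invented values for a 4-variable bi-factor model with 2 specific factors.
lam = np.array([0.7, 0.8, 0.6, 0.9])   # general-factor loadings (assumed)
group = np.array([0, 0, 1, 1])         # which specific factor each variable loads on
psi = np.array([0.5, 1.5])             # specific-factor variances (EU parameterization)
theta = 0.3 * np.eye(4)                # unique variances

def implied_cov(lam_g, lam_s, spec_var):
    """Model-implied covariance for a bi-factor model with orthogonal factors."""
    S = np.zeros((4, 2))
    for j, k in enumerate(group):
        S[j, k] = lam_s[j]             # each variable loads on one specific factor
    return np.outer(lam_g, lam_g) + S @ np.diag(spec_var) @ S.T + theta

# EU parameterization: specific loadings equal the general loadings,
# specific-factor variances left free.
sigma_eu = implied_cov(lam, lam, psi)

# Proportionality parameterization: all specific variances fixed to 1,
# specific loadings rescaled by sqrt(psi_k), hence proportional (not equal)
# to the general loadings.
lam_prop = lam * np.sqrt(psi[group])
sigma_prop = implied_cov(lam, lam_prop, np.ones(2))

assert np.allclose(sigma_eu, sigma_prop)  # identical implied covariances
```

The rescaling argument is the same one that underlies the usual identification constraint of fixing factor variances to one, applied factor by factor.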
References
American Educational Research Association. (2014). Standards for educational and psychological testing. Author.
Azen, R., & Budescu, D. V. (2006). Comparing predictors in multivariate regression: An extension of dominance analysis. Journal of Educational and Behavioral Statistics, 31(2), 157–180. https://doi.org/10.3102/10769986031002157
Booth, T., Murray, A. L., Overduin, M., Matthews, M., & Furnham, A. (2015). Distinguishing CEOs from top level management: A profile analysis of individual differences, career paths and demographics. Journal of Business and Psychology, 31, 205–216.
Bradlow, E. T., Wainer, H., & Wang, X. (1999). A Bayesian random effects model for testlets. Psychometrika, 64, 153–168. https://doi.org/10.1007/BF02294533
Brogden, H. E. (1977). The Rasch model, the law of comparative judgment, and additive conjoint measurement. Psychometrika, 42(4), 631–634. https://doi.org/10.1007/BF02295985
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114, 542–551. https://doi.org/10.1037/0033-2909.114.3.542
Campbell, N. R. (1920). Physics: The elements (Vol. 1). Cambridge University Press.
Campbell, N. R. (1928). An account of the principles of measurement and calculation. Longmans Green.
Davison, M. L., Jew, G., & Davenport, E. C., Jr. (2014). Patterns of SAT scores, choice of STEM major, and gender. Measurement and Evaluation in Counseling and Development, 47, 118–126. https://doi.org/10.1177/0748175614522269
Davison, M. L., Davenport, E. C., Jr., Kohli, N., Kang, Y., & Park, K. (2021). Addressing quantitative and qualitative hypotheses using regression models with equality restrictions and predictors measured in common units. Multivariate Behavioral Research, 56(1), 86–100. https://doi.org/10.1080/00273171.2020.1754154
Dilchert, S. (2007). Peaks and valleys: Predicting interests in leadership and managerial positions from personality profiles. International Journal of Selection and Assessment, 15(2), 317–334.
Domingue, B. (2014). Evaluating the equal-interval hypothesis with test score scales. Psychometrika, 79(1), 1–19. https://doi.org/10.1007/s11336-013-9342-4
Erford, B. T. (2012). Assessment for counselors (2nd ed.). Brooks/Cole.
Green, K. E. (1986). Fundamental measurement: A review and application of additive conjoint measurement in educational testing. The Journal of Experimental Education, 54(2), 141–147. https://doi.org/10.1080/00220973.1986.10806412
Jeon, M., Rijmen, F., & Rabe-Hesketh, S. (2018). CFA models with a general factor and multiple sets of secondary factors. Psychometrika, 83, 785–808. https://doi.org/10.1007/s11336-018-9633-x
Johnson, J. W., & Lebreton, J. M. (2004). History and use of relative importance indices in organizational research. Organizational Research Methods, 7, 238–257. https://doi.org/10.1177/1094428104266510
Karabatsos, G. (2001). The Rasch model, additive conjoint measurement, and new models of probabilistic measurement theory. Journal of Applied Measurement, 2(3), 389–423.
Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement: Vol. 1. Additive and polynomial representations. Academic Press.
Kyngdon, A. (2011). Plausible measurement analogies to some psychometric models of test performance. British Journal of Mathematical and Statistical Psychology, 64(2), 478–497. https://doi.org/10.1348/2044-8317.002004
Luce, R. D., & Tukey, J. W. (1964). Simultaneous conjoint measurement: A new scale type of fundamental measurement. Journal of Mathematical Psychology, 1(1), 1–27. https://doi.org/10.1016/0022-2496(64)90015-X
Michell, J. (1990). An introduction to the logic of psychological measurement. Psychology Press.
Peralta, Y., Kohli, N., Lock, E. F., & Davison, M. L. (2022). Bayesian modeling of associations in bivariate linear mixed-effects models. Psychological Methods, 27(1), 46–64. https://doi.org/10.1037/met0000358
Perline, R., Wright, B. D., & Wainer, H. (1979). The Rasch model as additive conjoint measurement. Applied Psychological Measurement, 3(2), 237–255. https://doi.org/10.1177/014662167900300213
Rijmen, F. (2010). Formal relations and an empirical comparison between the bi-factor, the testlet, and a second-order multidimensional IRT model. Journal of Educational Measurement, 47, 361–372. https://doi.org/10.1111/j.1745-3984.2010.00118.x
Rijmen, F., Jeon, M., von Davier, M., & Rabe-Hesketh, S. (2014). A third-order item response theory model for modeling the effects of domains and subdomains in large-scale educational assessment surveys. Journal of Educational and Behavioral Statistics, 39, 235–256. https://doi.org/10.3102/1076998614
Scientific Software International. (2011). IRTPRO guide. Author. http://www.ssicentral.com/index.php/products/irt/irtpro-downloads
Shen, W. (2011). The application of a person-oriented criterion-related configural approach to the relationship between personality traits and work behaviors [Doctoral dissertation, University of Minnesota]. https://hdl.handle.net/11299/113566
Shin, T., Davison, M. L., Long, J. D., Chan, C. K., & Heistad, D. (2013). Exploring gains in reading and mathematics achievement among regular and exceptional students using growth curve modeling. Learning and Individual Differences, 23(1), 92–100. https://doi.org/10.1016/j.lindif.2012.10.002
Takane, Y., & de Leeuw, J. (1987). On the relationship between item response theory and factor analysis of discretized variables. Psychometrika, 52, 393–408. https://doi.org/10.1007/BF02294363
Thissen, D. (2013). Using the testlet response model as a shortcut to multidimensional IRT subscore computation. In R. E. Millsap et al. (Eds.), New developments in quantitative psychology. Springer. https://doi.org/10.1007/978-1-4614-9348-8_3
Wiernik, B. M., Wilmot, M. P., Davison, M. L., & Ones, D. S. (2021). Meta-analytic criterion profile analysis. Psychological Methods, 26(2), 186–209. https://doi.org/10.1037/met0000305
Yung, Y.-F., Thissen, D., & McLeod, L. (1999). On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika, 64, 113–128. https://doi.org/10.1007/BF02294531
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
Professor Nidhi Kohli is a co-author and the Section Editor for Applications, Reviews, and Case Studies; the manuscript was therefore submitted to the Theory and Methodology section. Seungwon Chung is an employee of the US Food and Drug Administration (FDA) and has no conflict of interest to report. The views expressed in this document are those of the authors and should not be construed to represent official FDA views or policies. The information and analyses included in this document are provided for academic research purposes only and should not be considered FDA-recommended approaches. Mark L. Davison and Ernest C. Davenport, Jr. have no conflict of interest to report.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Davison, M.L., Chung, S., Kohli, N. et al. A Multidimensional Model to Facilitate Within Person Comparison of Attributes. Psychometrika (2024). https://doi.org/10.1007/s11336-023-09946-1
Received:
Published:
DOI: https://doi.org/10.1007/s11336-023-09946-1