Psychometrics and the Measurement of Emotional Intelligence

Part of the book series: The Springer Series on Human Exceptionality (SSHE)

The measurement of emotional intelligence (EI) has been met with a non-negligible amount of scepticism and criticism within academia, with some commentators suggesting that the area has suffered from a general lack of psychometric and statistical rigour (Brody, 2004). To help ameliorate this lack of sophistication, as well as to facilitate an understanding of many of the research strategies and findings reported in the various chapters of this book, this chapter will describe and elucidate several of the primary psychometric considerations in the evaluation of an inventory or test purported to measure a particular attribute or construct. To this effect, two central elements of psychometrics, reliability and validity, will be discussed in detail. Rather than assert a position as to whether the scores derived from putative measures of EI are associated with adequate levels of reliability and/or validity, this chapter will focus primarily on describing contemporary approaches to the assessment of reliability and validity. In many cases, however, comments specifically relevant to the area of EI will be made within the context of reliability and/or validity assessment.


Notes

  1.

    Internal consistency reliability based on the Kuder–Richardson 20 (KR20) formula is occasionally reported in the contemporary literature. The KR20 procedure predates Cronbach’s α, but is limited to dichotomously scored items from which “proportion correct” and “proportion incorrect” information can be derived for each item. When the items are of equal difficulty, a simpler formulation (i.e., KR21) can be used to estimate reliability.
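
    As a point of reference, the standard textbook forms of the two formulas (not reproduced from this chapter) are shown below, where \(k\) is the number of items, \(p_i\) and \(q_i\) are the proportions of examinees passing and failing item \(i\), \(\bar{X}\) is the mean total score, and \(\sigma_X^2\) is the total score variance:

    \(\mathrm{KR20} = \dfrac{k}{k - 1}\left(1 - \dfrac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right), \qquad \mathrm{KR21} = \dfrac{k}{k - 1}\left(1 - \dfrac{\bar{X}(k - \bar{X})}{k\,\sigma_X^2}\right)\)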

  2.

    \(\omega_A = \dfrac{(.70 + .70 + .70)^2}{(.70 + .70 + .70)^2 + (.51 + .51 + .51)} = \dfrac{2.10^2}{2.10^2 + 1.53} = \dfrac{4.41}{5.94} = .74\)
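
    For context, the worked example above instantiates the general formula for \(\omega\), in which \(\lambda_i\) denotes a standardized factor loading and \(\theta_{ii}\) the corresponding error (uniqueness) variance; with three loadings of .70, each error variance equals \(1 - .70^2 = .51\):

    \(\omega = \dfrac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_{ii}}\)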

  3.

    Of course, the interface between validity and reliability is further blurred by the close correspondence between factorial validity and internal consistency reliability.

  4.

    Some investigators have erroneously equated true scores with construct scores (e.g., Schmidt & Hunter, 1999). It should be noted that scores devoid of measurement error (i.e., true scores) are not necessarily scores associated with any construct validity (Borsboom & Mellenbergh, 2002).
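
    In classical test theory terms (a standard formulation, noted here for context), an observed score is decomposed as \(X = T + E\), and reliability is defined as \(\rho_{XX'} = \sigma_T^2 / \sigma_X^2\); nothing in this definition makes reference to the construct the test is intended to measure.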

  5.

    While it is true that split-half reliability estimates and Cronbach’s α estimates will not usually yield the same values, such a large discrepancy cannot reasonably be expected to be due to the different formulations.
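
    For reference, a split-half estimate is conventionally obtained by correlating the two test halves (\(r_{hh}\)) and stepping the result up to full-test length with the Spearman–Brown formula (a standard result, stated here for convenience):

    \(r_{SB} = \dfrac{2\,r_{hh}}{1 + r_{hh}}\)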

  6.

    Technically, the Vocabulary measure consisted of only 60% of the items of a full Vocabulary subtest, as only 30 of the 50 items from the standard Vocabulary subtest were chosen in the Mayer, Caruso, and Salovey (2000) study.

References

  • Alwin, D. F., & Hauser, R. M. (1975). The decomposition of effects in path analysis. American Sociological Review, 40, 37–47.
  • Anastasi, A. (1996). Psychological testing (7th ed.). New York: Macmillan.
  • Angoff, W. H. (1988). Validity: An evolving concept. In H. Wainer & H. Braun (Eds.), Test validity (pp. 9–13). Hillsdale, NJ: Lawrence Erlbaum.
  • Baugh, F. (2002). Correcting effect sizes for score reliability: A reminder that measurement and substantive issues are linked inextricably. Educational and Psychological Measurement, 62, 254–263.
  • Block, J. (2000). Three tasks for personality psychology. In L. R. Bergman, R. B. Cairns, L. G. Nilsson, & L. Nystedt (Eds.), Developmental science and the holistic approach (pp. 155–164). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley-Interscience.
  • Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge: Cambridge University Press.
  • Borsboom, D., & Mellenbergh, G. J. (2002). True scores, latent variables, and constructs: A comment on Schmidt and Hunter. Intelligence, 30, 505–514.
  • Brackett, M. A., & Mayer, J. D. (2003). Convergent, discriminant, and incremental validity of competing measures of emotional intelligence. Personality and Social Psychology Bulletin, 29, 1147–1158.
  • Brennan, R. (2001). An essay on the history and future of reliability from the perspective of replications. Journal of Educational Measurement, 38, 295–317.
  • Brody, N. (2004). Emotional intelligence: Science and myth (book review). Personality and Individual Differences, 32, 109–111.
  • Brownell, W. A. (1933). On the accuracy with which reliability may be measured by correlating test halves. Journal of Experimental Education, 1, 204–215.
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
  • Cattell, R. B., & Warburton, F. W. (1967). Objective personality and motivation tests: A theoretical introduction and practical compendium. Champaign, IL: University of Illinois Press.
  • Cliff, N. (1983). Some cautions concerning the application of causal modeling methods. Multivariate Behavioral Research, 18, 115–126.
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
  • Cohen, P., Cohen, J., Teresi, J., Marchi, M., & Velez, C. N. (1990). Problems in the measurement of latent variables in structural equations causal modeling. Applied Psychological Measurement, 14, 183–196.
  • Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
  • Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
  • Cronbach, L. J. (1960). Essentials of psychological testing (2nd ed.). New York: Harper & Brothers.
  • Fan, X. (2003). Two approaches for correcting correlation attenuation caused by measurement error: Implications for research practice. Educational and Psychological Measurement, 63, 915–930.
  • Gignac, G. E. (2006). Testing jingle-jangle fallacies in a crowded market of over-expansive constructs: The case of emotional intelligence. In C. Stough, D. Saklofske, & K. Hansen (Eds.), Research on emotional intelligence: International symposium 2005 (pp. 3–13). Melbourne: Tertiary Press.
  • Gignac, G. E. (2007). Working memory and fluid intelligence are both identical to g?! Reanalyses and critical evaluation. Psychological Science, 49, 187–207.
  • Gignac, G. E., Bates, T. C., & Lang, K. (2007). Implications relevant to CFA model misfit, reliability, and the five factor model as measured by the NEO-FFI. Personality and Individual Differences, 43, 1051–1062.
  • Gignac, G. E., Palmer, B., & Stough, C. (2007). A confirmatory factor analytic investigation of the TAS-20: Corroboration of a five-factor model and suggestions for improvement. Journal of Personality Assessment, 89, 247–257.
  • Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an index of test unidimensionality. Educational and Psychological Measurement, 37, 827–833.
  • Guilford, J. P. (1946). New standards for test evaluation. Educational and Psychological Measurement, 6, 427–439.
  • Guion, R. M. (1977). Content validity – The source of my discontent. Applied Psychological Measurement, 1, 1–10.
  • Hancock, G. R., & Mueller, R. O. (2001). Rethinking construct reliability within latent variable systems. In R. Cudeck, S. du Toit, & D. Sörbom (Eds.), Structural equation modeling: Present and future – A festschrift in honor of Karl Jöreskog (pp. 195–216). Lincolnwood, IL: Scientific Software International.
  • Harlow, L. L., Mulaik, S. A., & Steiger, J. A. (Eds.). (1997). What if there were no significance tests? Mahwah, NJ: Erlbaum.
  • Hemphill, J. F. (2003). Interpreting the magnitudes of correlation coefficients. American Psychologist, 58, 78–80.
  • Hunsley, J., & Meyer, G. J. (2003). The incremental validity of psychological testing and assessment: Conceptual, methodological, and statistical issues. Psychological Assessment, 15, 446–455.
  • Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly reported cutoff criteria: What did they really say? Organizational Research Methods, 9, 202–220.
  • Landy, F. J. (2005). Some historical and scientific issues related to research on emotional intelligence. Journal of Organizational Behavior, 26, 411–424.
  • Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
  • Marsh, H. W., & Byrne, B. W. (1993). Confirmatory factor analysis of multitrait-multimethod self-concept data: Between-group and within-group invariance constraints. Multivariate Behavioral Research, 28, 313–449.
  • Matarazzo, J. D., & Herman, D. O. (1984). Base rate data for the WAIS-R: Test–retest stability and VIQ–PIQ differences. Journal of Clinical and Experimental Neuropsychology, 6, 351–366.
  • Matthews, G., Zeidner, M., & Roberts, R. D. (2002). Emotional intelligence: Science and myth. Cambridge, MA: MIT Press.
  • Mayer, J. D., Caruso, D. R., & Salovey, P. (2000). Emotional intelligence meets traditional standards for an intelligence. Intelligence, 27, 267–298.
  • Mayer, J. D., Salovey, P., & Caruso, D. (2000). Models of emotional intelligence. In R. J. Sternberg (Ed.), The handbook of intelligence (pp. 396–420). New York: Cambridge University Press.
  • Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2003). Measuring emotional intelligence with the MSCEIT V2.0. Emotion, 3, 97–105.
  • McCrae, R. R. (2000). Emotional intelligence from the perspective of the five-factor model of personality. In R. Bar-On & J. D. A. Parker (Eds.), Handbook of emotional intelligence (pp. 263–276). San Francisco: Jossey-Bass.
  • McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60, 175–215.
  • McDonald, R. P. (1970). The theoretical foundations of principal factor analysis, canonical factor analysis, and alpha factor analysis. British Journal of Mathematical and Statistical Psychology, 23, 1–21.
  • McGrath, R. E. (2005). Conceptual complexity and construct validity. Journal of Personality Assessment, 85, 112–124.
  • Muchinsky, P. M. (1996). The correction for attenuation. Educational and Psychological Measurement, 56, 63–75.
  • Novick, M. R., & Lewis, C. L. (1967). Coefficient alpha and the reliability of composite measurements. Psychometrika, 32, 1–13.
  • Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.
  • Parker, J. D. A., Bagby, R. M., Taylor, G. J., Endler, N. S., & Schmitz, P. (1993). Factorial validity of the twenty-item Toronto Alexithymia Scale. European Journal of Personality, 7, 221–232.
  • Pedhazur, E. J. (1997). Multiple regression in behavioral research. Orlando, FL: Harcourt Brace.
  • Peterson, R. A. (1994). A meta-analysis of Cronbach’s alpha. Journal of Consumer Research, 21, 381–391.
  • Raykov, T. (2001). Bias in coefficient alpha for fixed congeneric measures with correlated errors. Applied Psychological Measurement, 25(1), 69–76.
  • Reuterberg, S. E., & Gustafsson, J.-E. (1992). Confirmatory factor analysis and reliability: Testing measurement model assumptions. Educational and Psychological Measurement, 52, 795–811.
  • Schmidt, F. L., & Hunter, J. E. (1999). Theory testing and measurement error. Intelligence, 27, 183–198.
  • Shavelson, R. J., Webb, N. M., & Rowley, G. L. (1989). Generalizability theory. American Psychologist, 44, 922–932.
  • Sireci, S. G. (1998). Gathering and analyzing content validity data. Educational Assessment, 5(4), 299–312.
  • Spearman, C. (1904). The proof and measurement of association between two things. American Journal of Psychology, 15, 72–101.
  • Spector, P. E. (1994). Using self-report questionnaires in OB research: A comment on the use of a controversial method. Journal of Organizational Behavior, 15, 385–392.
  • Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677–680.
  • Thompson, B., & Vacha-Haase, T. (2000). Psychometrics and datametrics: The test is not reliable. Educational and Psychological Measurement, 60, 174–195.
  • Tomarken, A. J., & Waller, N. G. (2003). The problems with “well-fitting” models. Journal of Abnormal Psychology, 112, 578–598.
  • Werts, C. E., & Watley, D. J. (1968). Analyzing school effects: How to use the same data to support different hypotheses. American Educational Research Journal, 5, 585–598.
  • Zeidner, M., Matthews, G., & Roberts, R. D. (2001). Slow down, you move too fast: Emotional intelligence remains an “elusive” construct. Emotion, 1, 265–275.

Author information

Correspondence to Gilles E. Gignac.

Copyright information

© 2009 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Gignac, G.E. (2009). Psychometrics and the Measurement of Emotional Intelligence. In: Parker, J., Saklofske, D., Stough, C. (eds) Assessing Emotional Intelligence. The Springer Series on Human Exceptionality. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-88370-0_2
