Quality and Quantity, Volume 37, Issue 4, pp 393–409

Effect Sizes in Qualitative Research: A Prolegomenon

  • Anthony J. Onwuegbuzie

Abstract

The American Psychological Association's Task Force on Statistical Inference recommended that researchers always report and interpret effect sizes for quantitative data. However, no such recommendation was made for qualitative data. Thus, the first objective of the present paper is to provide a rationale for reporting and interpreting effect sizes in qualitative research. Arguments are presented that effect sizes enhance the process of verstehen/hermeneutics advocated by interpretive researchers. The second objective of this paper is to provide a typology of effect sizes in qualitative research, with examples illustrating their various applications. For instance, when conducting typological analyses, qualitative analysts typically only identify emergent themes; yet these themes can be quantitized to ascertain their hierarchical structure. The final objective is to illustrate how inferential statistics can be utilized in qualitative data analyses. This can be accomplished by treating words arising from individuals, or observations emerging from a particular setting, as sample units of data that represent the total number of words/observations available from that sample member/context. Heuristic examples are provided to demonstrate how inferential statistics can be used to provide more complex levels of verstehen than are presently undertaken in qualitative research.

Keywords: effect sizes, qualitative research, quantitize, meta-theme, inter-respondent matrix, intra-respondent matrix
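
To make the quantitizing idea concrete, the sketch below (Python; all data, theme labels, and the .5 chance baseline are hypothetical illustrations, not the paper's own example) codes emergent themes into a binary inter-respondent matrix, computes each theme's manifest effect size as the proportion of respondents endorsing it, and attaches a one-sided exact binomial test as one plausible inferential step of the kind the abstract describes.

```python
import math

# Hypothetical inter-respondent matrix: one row per respondent, one column per
# emergent theme; 1 = theme present in that respondent's transcript, 0 = absent.
themes = ["student-centred", "knowledgeable", "enthusiastic"]
matrix = [
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 0],
]

n = len(matrix)  # number of respondents

def binomial_p_upper(k: int, n: int, p: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for j, theme in enumerate(themes):
    endorsements = sum(row[j] for row in matrix)
    effect_size = endorsements / n  # manifest frequency: proportion endorsing the theme
    p_value = binomial_p_upper(endorsements, n)  # endorsement beyond a .5 baseline?
    print(f"{theme}: effect size = {effect_size:.0%}, one-sided p = {p_value:.3f}")
```

Ranking the resulting proportions recovers the hierarchical structure of themes the abstract mentions; the binomial test is merely one way such frequencies could be tested inferentially.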

References

  1. Allport, G. W. (1937). Personality: A Psychological Interpretation. New York: Holt.
  2. Allport, G. W. (1962). The general and the unique in psychological science. Journal of Personality 30: 405–422.
  3. Allport, G. W. (1966). Traits revisited. American Psychologist 21: 1–10.
  4. American Educational Research Association (2000). American Educational Research Association 2001 annual meeting call for proposals. Educational Researcher 29(4): 27–41.
  5. Berg, B. L. (1989). Qualitative Research Methods for the Social Sciences. Boston, MA: Allyn & Bacon.
  6. Bernstein, I. H. & Teng, G. (1989). Factoring items and factoring scales are different: Spurious evidence for multidimensionality due to item categorization. Psychological Bulletin 105: 467–477.
  7. Block, J. (1957). A comparison between ipsative and normative ratings of personality. Journal of Abnormal and Social Psychology 54: 50–54.
  8. Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  9. Colaizzi, P. F. (1978). Psychological research as the phenomenologist views it. In: R. Valle & M. King (eds.), Existential Phenomenological Alternatives for Psychology. New York: Oxford University Press, pp. 48–71.
  10. Daniel, L. G. (1998a). Statistical significance testing: A historical overview of misuse and misinterpretation with implications for editorial policies of educational journals. Research in the Schools 5: 23–32.
  11. Daniel, L. G. (1998b). The statistical significance controversy is definitely not over: A rejoinder to responses by Thompson, Knapp, and Levin. Research in the Schools 5: 63–65.
  12. Glaser, B. G. & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine.
  13. Goetz, J. P. & LeCompte, M. D. (1984). Ethnography and Qualitative Design in Educational Research. New York: Academic Press.
  14. Hetzel, R. D. (1996). A primer on factor analysis with comments on patterns of practice and reporting. In: B. Thompson (ed.), Advances in Social Science Methodology (Vol. 4). Greenwich, CT: JAI Press, pp. 175–206.
  15. Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage.
  16. Miles, M. B. & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
  17. Newman, I. & Benz, C. R. (1998). Qualitative-Quantitative Research Methodology: Exploring the Interactive Continuum. Carbondale, IL: Southern Illinois University Press.
  18. Onwuegbuzie, A. J. (2000a, November). Validity and qualitative research: An oxymoron? Paper presented at the annual meeting of the Association for the Advancement of Educational Research (AAER), Ponte Vedra, FL.
  19. Onwuegbuzie, A. J. (2000b). On becoming a bi-researcher: The importance of combining quantitative and qualitative research methodologies. Paper presented at the annual meeting of the Association for the Advancement of Educational Research (AAER), Ponte Vedra, FL.
  20. Onwuegbuzie, A. J. (in press-a). Common analytical and interpretational errors in educational research. Educational Research Quarterly.
  21. Onwuegbuzie, A. J. (in press-b). Expanding the framework of internal and external validity in quantitative research. Research in the Schools.
  22. Onwuegbuzie, A. J. & Daniel, L. G. (2002). Uses and misuses of the correlation coefficient. Research in the Schools 9: 73–90.
  23. Onwuegbuzie, A. J. & Daniel, L. G. (in press). Typology of analytical and interpretational errors in quantitative and qualitative educational research. Current Issues in Education.
  24. Onwuegbuzie, A. J. & Teddlie, C. (2002). A framework for analyzing data in mixed methods research. In: A. Tashakkori & C. Teddlie (eds.), Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage, pp. 351–383.
  25. Sechrest, L. & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative? Evaluation and Program Planning 18: 77–87.
  26. Stephenson, W. (1953). The Study of Behavior. Chicago: University of Chicago Press.
  27. Tashakkori, A. & Teddlie, C. (1998). Mixed Methodology: Combining Qualitative and Quantitative Approaches. Applied Social Research Methods Series (Vol. 46). Thousand Oaks, CA: Sage.
  28. Tashakkori, A. & Teddlie, C. (in press). Issues and dilemmas in teaching research methods courses in social and behavioral sciences: A U.S. perspective. International Journal of Social Research Methodology.
  29. Thompson, B. (1998a). Five methodological errors in educational research: The pantheon of statistical significance and other faux pas. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.
  30. Thompson, B. (1998b). Statistical testing and effect size reporting: Portrait of a possible future. Research in the Schools 5: 33–38.
  31. Thompson, B. (1999). Common methodology mistakes in educational research, revisited, along with a primer on both effect sizes and the bootstrap. Invited address presented at the annual meeting of the American Educational Research Association, Montreal. Available: http://acs.tamu.edu/~bbt6147/aeraad99.htm
  32. Thompson, B. & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement 56: 197–208.
  33. Wilkinson, L. & the Task Force on Statistical Inference (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist 54: 594–604.
  34. Witcher, A., Onwuegbuzie, A. J. & Minor, L. (2001). Characteristics of effective teachers: Perceptions of preservice teachers. Research in the Schools 8: 45–57.

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Anthony J. Onwuegbuzie
  1. Howard University, USA
