Some Methodological and Statistical “Bugs” in Research on Children’s Learning

  • Joel R. Levin
Part of the Springer Series in Cognitive Development book series (SSCOG)

Abstract

This chapter provides me with the opportunity to discuss a number of methodological and statistical “bugs” that I have detected creeping into psychological research in general, and into research on children’s learning in particular. Naturally, one cannot hope to exterminate all such bugs with but a single essay. Rather, it is hoped that this chapter will leave a trail of pellets that is sufficiently odoriferous to get to the source of these potentially destructive little creatures. It also goes without saying that different people in this trade have different entomological lists that they would like to see presented. Although all cannot be presented here, I intend to introduce you to nearly 20 of my own personal favorites. At the same time, it must be stated at the outset that present space limitations do not permit a complete specification and resolution of the problems that these omnipresent bugs can create for cognitive-developmental researchers. Consequently, in most cases I will only allude to a problem and its potential remedies, placing the motivation for additional inquiry squarely in the lap of the curious reader.

Keywords

Error Probability, Floor Effect, Developmental Level, American Educational Research Association, Trend Component

Copyright information

© Springer-Verlag New York Inc. 1985

Authors and Affiliations

  • Joel R. Levin
