Higher Education, Volume 74, Issue 4, pp 669–685

The improvement of student teachers’ instructional quality during a 15-week field experience: a latent multimethod change analysis

Abstract

Most studies evaluating the effectiveness of school internships have relied on self-assessments, which are prone to self-presentational distortions. Therefore, the present study analyzed the improvement in the instructional quality of 102 student teachers (46 women) from a German university during a 15-week internship at a local secondary school, drawing on three rating sources: the student teachers themselves, their students, and their mentors (experienced teachers). A latent multimethod change analysis identified a significant increase in instructional quality during the practice semester. However, the ratings of the three informant groups converged only marginally.
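
The core of such an analysis can be stated compactly. As a minimal sketch (the notation is illustrative, not the authors' exact model specification), assume a single instructional-quality construct measured on two occasions ($t = 1, 2$) by three rater groups ($m$ = self, students, mentor):

$$Y_{mt} = \lambda_{m}\,\eta_t + \zeta_m + \varepsilon_{mt}, \qquad \eta_2 = \eta_1 + \Delta\eta,$$

where $\eta_t$ is latent instructional quality at occasion $t$, $\zeta_m$ is a method factor capturing rater-specific bias, $\varepsilon_{mt}$ is measurement error, and $\Delta\eta$ is the latent true change. A positive mean of $\Delta\eta$ indicates improvement over the internship, while the size of the method-factor variances relative to the trait variance indexes how far the rater groups converge.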

Keywords

Theory and practice · Practical experience · Teacher education · Multimethod change analysis · Instructional quality

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. Leibniz-Institut für Wissensmedien IWM (Knowledge Media Research Center), Tübingen, Germany
  2. Leibniz-Institut für Bildungsverläufe LIfBi (Leibniz Institute for Educational Trajectories), Bamberg, Germany